comp.lang.ada
* {Pre,Post}conditions and side effects
@ 2014-12-22 16:22 Jean François Martinez
  2014-12-22 17:18 ` Brad Moore
                   ` (2 more replies)
  0 siblings, 3 replies; 107+ messages in thread
From: Jean François Martinez @ 2014-12-22 16:22 UTC (permalink / raw)


The first time I read about pre/postconditions in Ada 2012 I felt some discomfort about functions invoked in pre/post conditions being allowed to have side effects.  Thus it is possible to have programs that work when checks are enabled and break when they are disabled, because the side effects "make them work".  I don't like "solutions" like "the compiler will allow you to shoot yourself in the foot, so be careful".  That is C, not Ada.

Perhaps it would have been a good idea to have a No_Side_Effects aspect, with only functions labelled with this aspect allowed in pre/post conditions.  Such functions would not be able to modify global variables and would be restricted to invoking only functions/procedures marked with this aspect.  This is somewhat akin to the restrictions for packages marked with elaboration pragmas or with pragmas related to the Distributed Systems Annex.  In both cases a package may only "with" a "lower or equal" package.

The first problem I see is that I/O functions have side effects, so when debugging No_Side_Effects functions it would be impossible to use the good old method of inserting Puts in the code.

The second and biggest problem is that it is too late.

Jean François Martinez



* Re: {Pre,Post}conditions and side effects
  2014-12-22 16:22 {Pre,Post}conditions and side effects Jean François Martinez
@ 2014-12-22 17:18 ` Brad Moore
  2014-12-23  8:22   ` Jean François Martinez
  2014-12-23 21:53   ` Florian Weimer
  2014-12-22 19:38 ` sbelmont700
  2014-12-22 23:32 ` Randy Brukardt
  2 siblings, 2 replies; 107+ messages in thread
From: Brad Moore @ 2014-12-22 17:18 UTC (permalink / raw)


On 14-12-22 09:22 AM, Jean François Martinez wrote:
> The first time I read about pre/postconditions in Ada 2012 I felt some discomfort about functions invoked in pre/post conditions being allowed to have side effects.  Thus it is possible to have programs that work when checks are enabled and break when they are disabled, because the side effects "make them work".  I don't like "solutions" like "the compiler will allow you to shoot yourself in the foot, so be careful".  That is C, not Ada.
>
> Perhaps it would have been a good idea to have a No_Side_Effects aspect, with only functions labelled with this aspect allowed in pre/post conditions.  Such functions would not be able to modify global variables and would be restricted to invoking only functions/procedures marked with this aspect.  This is somewhat akin to the restrictions for packages marked with elaboration pragmas or with pragmas related to the Distributed Systems Annex.  In both cases a package may only "with" a "lower or equal" package.

There is a proposal for a Global aspect to be considered for Ada 
202x, which could be applied to subprograms. Such an aspect would 
indicate which global variables a subprogram body references. It would 
also let you state that a subprogram references no globals.

eg.
        function Foo (X : Integer) return Integer
              with Globals => null;

I believe such a subprogram would not have side-effects

>
> The first problem I see is that I/O functions have side effects, so when debugging No_Side_Effects functions it would be impossible to use the good old method of inserting Puts in the code.

Agreed. I suppose you could comment out the Globals => null during 
debugging, and uncomment it once the debugging code had been removed.

>
> The second and biggest problem is that it is too late.

Too late for Ada 2012, but not for Ada 202X. Or you could consider using 
SPARK 2014 for this code, as it already has this Global aspect. Or you 
might ask your compiler vendor to provide an implementation-specific 
solution.

Brad


>
> Jean François Martinez
>


* Re: {Pre,Post}conditions and side effects
  2014-12-22 16:22 {Pre,Post}conditions and side effects Jean François Martinez
  2014-12-22 17:18 ` Brad Moore
@ 2014-12-22 19:38 ` sbelmont700
  2014-12-22 19:59   ` Brad Moore
                     ` (2 more replies)
  2014-12-22 23:32 ` Randy Brukardt
  2 siblings, 3 replies; 107+ messages in thread
From: sbelmont700 @ 2014-12-22 19:38 UTC (permalink / raw)


On Monday, December 22, 2014 11:22:29 AM UTC-5, Jean François Martinez wrote:
> Perhaps it would have been a good idea to have a No_Side_Effects aspect, with only functions labelled with this aspect allowed in pre/post conditions.  Such functions would not be able to modify global variables and would be restricted to invoking only functions/procedures marked with this aspect.  This is somewhat akin to the restrictions for packages marked with elaboration pragmas or with pragmas related to the Distributed Systems Annex.  In both cases a package may only "with" a "lower or equal" package.
> 

It's more than just global variables, though.  Your condition might be thusly ludicrous:

function P return Boolean is
   x : Some_Ptr_Type := new Integer'(42);
begin
   return True;
exception
   when others => Some_Task.Blocking_Rendezous;
end P;

Now you've got a function that doesn't access any global variables, but that can not only randomly exhaust the heap but also permanently block the thread, so you are right back where you started.  You end up needing all the same rules and restrictions as pragma Pure, which means you might as well just be beholden to pragma Pure to begin with (or use qualified expressions).

But more to the point, there is no escaping that disabling/enabling checks will cause the program to function differently; that is, after all, the entire point.  What constitutes 'correctness' is only in the eyes of the programmer.  Ergo:

procedure Some_Subprogram is null with Pre => false;

procedure Insane is
begin
  Some_Subprogram;
  raise Program_Error;
exception
  when others => Continue_Running;
end Insane;

No globals, but it still "breaks" when checks are disabled.

-sb



* Re: {Pre,Post}conditions and side effects
  2014-12-22 19:38 ` sbelmont700
@ 2014-12-22 19:59   ` Brad Moore
  2014-12-22 23:46   ` Randy Brukardt
  2014-12-23 10:41   ` Georg Bauhaus
  2 siblings, 0 replies; 107+ messages in thread
From: Brad Moore @ 2014-12-22 19:59 UTC (permalink / raw)


On 14-12-22 12:38 PM, sbelmont700@gmail.com wrote:
> On Monday, December 22, 2014 11:22:29 AM UTC-5, Jean François Martinez wrote:
>> Perhaps it would have been a good idea to have a No_Side_Effects aspect, with only functions labelled with this aspect allowed in pre/post conditions.  Such functions would not be able to modify global variables and would be restricted to invoking only functions/procedures marked with this aspect.  This is somewhat akin to the restrictions for packages marked with elaboration pragmas or with pragmas related to the Distributed Systems Annex.  In both cases a package may only "with" a "lower or equal" package.
>>
>
> It's more than just global variables, though.  Your condition might be thusly ludicrous:
>
> function P return Boolean is
>     x : Some_Ptr_Type := new Integer'(42);
> begin
>     return True;
> exception
>     when others => Some_Task.Blocking_Rendezous;
> end P;
>
> Now you've got a function that doesn't access any global variables, but that can not only randomly exhaust the heap but also permanently block the thread, so you are right back where you started.  You end up needing all the same rules and restrictions as pragma Pure, which means you might as well just be beholden to pragma Pure to begin with (or use qualified expressions).

The value designated by X is a global variable though.

In the proposal for the Global aspect which appeared in a paper at the 
most recent HILT conference, you would not be allowed to specify

function P return Boolean
    with Global => null;

for this case, as the values designated by Some_Ptr_Type are effectively 
global variables.

You could, however, write:

   function P return Boolean
      with Global => (in out => Some_Ptr_Type);

However, this function now obviously has side effects.


>
> But more to the point, there is no escaping that disabling/enabling checks will cause the program to function differently; that is, after all, the entire point.  What constitutes 'correctness' is only in the eyes of the programmer.  Ergo:
>
> procedure Some_Subprogram is null with Pre => false;
>
> procedure Insane is
> begin
>    Some_Subprogram;
>    raise Program_Error;
> exception
>    when others => Continue_Running;
> end Insane;
>
> No globals, but it still "breaks" when checks are disabled.

I don't see anything being broken here. Some_Subprogram still honours its 
contract, and does what it says it will do. The contract of Some_Subprogram 
doesn't impact the Insane procedure's contract. You haven't shown any 
contract for Insane, so for all we know it has no contract and can do 
anything, including what you have here.


Brad

>
> -sb
>


* Re: {Pre,Post}conditions and side effects
  2014-12-22 16:22 {Pre,Post}conditions and side effects Jean François Martinez
  2014-12-22 17:18 ` Brad Moore
  2014-12-22 19:38 ` sbelmont700
@ 2014-12-22 23:32 ` Randy Brukardt
  2015-04-24 17:59   ` Shark8
  2 siblings, 1 reply; 107+ messages in thread
From: Randy Brukardt @ 2014-12-22 23:32 UTC (permalink / raw)



"Jean François Martinez" <darkquark99@gmail.com> wrote in message 
news:2430252d-52a1-4609-acef-684864e6ca0c@googlegroups.com...
>The first time I read about pre/postconditions in Ada 2012 I felt some 
>discomfort about functions invoked in pre/post conditions being allowed to 
>have side effects.  Thus it is possible to have programs that work when 
>checks are enabled and break when they are disabled, because the side 
>effects "make them work".  I don't like "solutions" like "the compiler 
>will allow you to shoot yourself in the foot, so be careful".  That is C, 
>not Ada.
>
>Perhaps it would have been a good idea to have a No_Side_Effects aspect, 
>with only functions labelled with this aspect allowed in pre/post 
>conditions.

This was proposed with the original proposal. It was eventually dropped 
because it is too hard to define what a side-effect actually is (easy 
definitions are too fierce).

We tried to invent a "Global" aspect for Ada 2012, but it was considered too 
complex and too immature for Ada 2012. We're going to try again (with the 
parallel effort) for the next version of Ada.

Second, we did insert a rule which gives an implementation permission to 
reject assertions with result-changing side-effects. (11.4.2(27/3)). One 
hopes that as compiler technology improves, implementations will take some 
advantage of this to reject uses that clearly cause trouble.

Third, we also introduced expression functions so that the compiler can see 
more (or all) of the parts of an assertion. We hoped that would allow more 
static analysis (including rejection of bad side-effects).
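
For illustration, a minimal sketch of what that enables, with invented 
names (Stack, Is_Valid, Push); because Is_Valid is an expression function, 
its body is visible wherever the precondition is analyzed:

   type Stack is record
      Top      : Natural  := 0;
      Capacity : Positive := 100;
   end record;

   --  An expression function: the body is part of the declaration, so
   --  the whole precondition below is open to static analysis.
   function Is_Valid (S : Stack) return Boolean is
      (S.Top <= S.Capacity);

   procedure Push (S : in out Stack; X : in Integer)
      with Pre => Is_Valid (S) and then S.Top < S.Capacity;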

To summarize, we spent a lot of time agonizing about this particular issue, 
wording solutions, and the like. It's most certainly not the case that it 
wasn't considered, but more that we felt that the alternative of strong 
restrictions would be barely usable.

                               Randy Brukardt, ARG Editor.




* Re: {Pre,Post}conditions and side effects
  2014-12-22 19:38 ` sbelmont700
  2014-12-22 19:59   ` Brad Moore
@ 2014-12-22 23:46   ` Randy Brukardt
  2014-12-23 10:41   ` Georg Bauhaus
  2 siblings, 0 replies; 107+ messages in thread
From: Randy Brukardt @ 2014-12-22 23:46 UTC (permalink / raw)



<sbelmont700@gmail.com> wrote in message 
news:c17f6550-1cc8-4ef7-8477-4512d16d294a@googlegroups.com...
On Monday, December 22, 2014 11:22:29 AM UTC-5, Jean François Martinez 
wrote:
>> Perhaps it would have been a good idea to have a No_Side_Effects aspect 
>> ...

>It's more than just global variables, though.  Your condition might be 
>thusly ludicrous:
>
>function P return Boolean is
>   x : Some_Ptr_Type := new Integer'(42);

The storage pool of Some_Ptr_Type is a global variable. It would have to be 
mentioned in the Global aspect, else this allocator is illegal.

>begin
>   return True;
>exception
>   when others => Some_Task.Blocking_Rendezous;

There's also an aspect "Potentially_Blocking" under development, to cover 
this case.

>end P;

>Now you've got a function that doesn't access any global variables, but 
>that can not only randomly exhaust the heap,

As noted, the heap is a global variable, so that couldn't happen with Global 
=> null.

>but also permanently block the thread, so you are right back where you 
>started.

That's already a Bounded Error (see 11.4.2(23.1/3)). One presumes that if 
pragma Detect_Blocking is used, it will always raise Program_Error. Plus 
there would be the aspect noted above, for static checking.

>You end up needing all the same rules and restrictions as pragma Pure, 
>which means
>you might as well just be beholden to pragma Pure to begin with (or use 
>qualified
>expressions).

Actually, you need quite a bit more than Pure. Pure allows dereferences of 
access values, which can't be allowed willy-nilly (the pool is a global 
variable, after all). Plus Pure only applies to complete packages, which is 
way too unwieldy for assertions (typically an assertion depends on predicate 
functions defined with an ADT; we surely don't want to have to make the 
entire ADT Pure just to use assertions).

>But more to the point, there is no escaping that disabling/enabling checks 
>will cause the
>program to function differently; that is, after all, the entire point. 
>What constitutes
>'correctness' is only in the eyes of the programmer.  Ergo:

You're right here, and I've always thought that is a problem. It's one 
reason that a package author can prevent some or all assertions from being 
disabled in the package.

>procedure Some_Subprogram is null with Pre => false;
>
>procedure Insane is
>begin
>  Some_Subprogram;
>  raise Program_Error;
>exception
>  when others => Continue_Running;
>end Insane;
>
>No globals, but it still "breaks" when checks are disabled.

A much better example is an implementation that has implemented the 
containers using preconditions rather than explicit checks. For instance:

    procedure Replace_Element (Container : in out Vector;
                               Position  : in     Cursor;
                               New_Item  : in     Element_Type)
       with Pre'Class => (if Tampering_With_Elements_Prohibited (Container)
                          then raise Program_Error) and then
                         (if Position = No_Element
                          then raise Constraint_Error) and then
                         (if not Cursor_Belongs_To_Container (Container, Position)
                          then raise Program_Error);

If this is called with preconditions ignored, the required semantics of 
Replace_Element won't happen [because the checks for the various exceptions 
won't happen - no one is going to repeat the precondition checks in the 
body - if that were required, the precondition would be worthless]. [Note: This 
precondition uses a couple of predicates that aren't defined in the Ada 2012 
containers, but should have been. Most likely, the next version of Ada will 
rewrite the containers this way, as it will get rid of a lot of text in the 
Standard.]

                                      Randy.



* Re: {Pre,Post}conditions and side effects
  2014-12-22 17:18 ` Brad Moore
@ 2014-12-23  8:22   ` Jean François Martinez
  2014-12-23 17:05     ` Robert A Duff
  2014-12-23 21:53   ` Florian Weimer
  1 sibling, 1 reply; 107+ messages in thread
From: Jean François Martinez @ 2014-12-23  8:22 UTC (permalink / raw)


On Monday, December 22, 2014 6:18:25 PM UTC+1, Brad Moore wrote:
> On 14-12-22 09:22 AM, Jean François Martinez wrote:

> > The second and biggest problem is that it is too late.
> 
> Too late for Ada 2012, but not for Ada 202X. Or you could consider using 
> SPARK 2014 for this code, as it already has this Global aspect. Or you 
> might ask your compiler vendor to provide an implementation-specific 
> solution.
> 
> Brad
> 
> 

By too late I meant that once something is allowed it is difficult to make it illegal, since a number of programs would no longer compile. Now, functions invoked in pre/post conditions are only supposed to check parameters, so most of the time they should be quite simple, and sensible programmers would avoid side effects (except a few debugging Puts), but I still feel uncomfortable about this C-ish "be careful while playing Russian roulette" philosophy.  At least the Rationale should have brought this point to attention.  The author must have assumed we were bright enough to figure it out.  :)

Jean François Martinez



* Re: {Pre,Post}conditions and side effects
  2014-12-22 19:38 ` sbelmont700
  2014-12-22 19:59   ` Brad Moore
  2014-12-22 23:46   ` Randy Brukardt
@ 2014-12-23 10:41   ` Georg Bauhaus
  2 siblings, 0 replies; 107+ messages in thread
From: Georg Bauhaus @ 2014-12-23 10:41 UTC (permalink / raw)


On 22.12.14 20:38, sbelmont700@gmail.com wrote:
> Now you've got a function that doesn't access any global variables, but that can not only randomly exhaust the heap but also permanently block the thread, so you are right back where you started.

A function call can "cause" even more, pure or not. Any Ada
check can have effects.

Add to Randy Brukardt's response that everything manipulates
"the program"'s state. Invoking any function can also exhaust
resources, e.g. hit the stack limit. But that's then the catch-all
Standard.$Runtime$_Error.

However, there are two notions to keep in mind, I think:

1. Contract-based design, or at least Design by Contract™, entails
a proof obligation(*).  Without it, the former is still a very useful
debugging tool, or testing tool.

2. The model of contract-based design should not be confused with
its implementation, I think: For example, the checks may be performed
on a multiprocessor computer in such a way that one of the processors
checks a copy of a sequential program, while the other does not check.

There might be theoretical corner cases where duplication is not
possible. It might be impractical to detect important differences
in the copies' states, but you get the idea, in particular with (1)
in mind: proof obligations.

In fact, a duplicated setup does not seem to be entirely dissimilar
from redundant installations already in use?

__
(*) At Ada Europe 2012, Bertrand Meyer explained that Eiffel programs
in production are supposed to run with checks disabled.



* Re: {Pre,Post}conditions and side effects
  2014-12-23  8:22   ` Jean François Martinez
@ 2014-12-23 17:05     ` Robert A Duff
  2014-12-23 21:09       ` Jean François Martinez
  0 siblings, 1 reply; 107+ messages in thread
From: Robert A Duff @ 2014-12-23 17:05 UTC (permalink / raw)


Jean François Martinez <darkquark99@gmail.com> writes:

> By too late I meant that once something is allowed it is difficult to
> make it illegal since a number of programs would no longer
> compile.

I can think of ways to do what you want without any incompatibility.

>... Now functions invoked in pre/post conditions are only
> supposed to check parameters...

I don't agree with that.  It sometimes makes sense for pre/post to
check values of global variables.  Probably not often -- after all,
it's not often that you should have global variables in the first place.
But surely if the job of a procedure is to update some global,
it makes sense to have a postcondition ensuring that it did so
correctly.

Modifying globals in pre/post/predicate/invariant is another story,
and that's what you (correctly) complained about.

>...so most of the time they should be quite
> simple, and sensible programmers would avoid side effects (except a few
> debugging Puts), but I still feel uncomfortable about this C-ish "be
> careful while playing Russian roulette" philosophy.  At least the
> Rationale should have brought this point to attention.  The author must
> have assumed we were bright enough to figure it out.  :)

Well, apparently YOU were bright enough to figure it out.  ;-)

- Bob


* Re: {Pre,Post}conditions and side effects
  2014-12-23 17:05     ` Robert A Duff
@ 2014-12-23 21:09       ` Jean François Martinez
  2014-12-23 21:35         ` Robert A Duff
  2014-12-23 23:02         ` Peter Chapin
  0 siblings, 2 replies; 107+ messages in thread
From: Jean François Martinez @ 2014-12-23 21:09 UTC (permalink / raw)


On Tuesday, December 23, 2014 6:05:33 PM UTC+1, Robert A Duff wrote:

> 
> I don't agree with that.  It sometimes makes sense for pre/post to
> check values of global variables.  Probably not often -- after all,
> it's not often that you should have global variables in the first place.
> But surely if the job of a procedure is to update some global,
> it makes sense to have a postcondition ensuring that it did so
> correctly.
> 

That was clumsy wording on my part (I was in a hurry).  It should have been: a pre/postcondition/invariant usually doesn't involve redoing the whole calculation but just checking values of "data entities", both in the absolute and relative to one another.  Most of the time you don't write pre/postconditions/invariants involving (with the functions called) tens of thousands of lines of code and gazillions of CPU cycles.

Jean-François Martinez



* Re: {Pre,Post}conditions and side effects
  2014-12-23 21:09       ` Jean François Martinez
@ 2014-12-23 21:35         ` Robert A Duff
  2014-12-23 23:02         ` Peter Chapin
  1 sibling, 0 replies; 107+ messages in thread
From: Robert A Duff @ 2014-12-23 21:35 UTC (permalink / raw)


Jean François Martinez <darkquark99@gmail.com> writes:

> On Tuesday, December 23, 2014 6:05:33 PM UTC+1, Robert A Duff wrote:
>
>> I don't agree with that.  It sometimes makes sense for pre/post to
>> check values of global variables.  Probably not often -- after all,
>> it's not often that you should have global variables in the first place.
>> But surely if the job of a procedure is to update some global,
>> it makes sense to have a postcondition ensuring that it did so
>> correctly.
>
> That was clumsy wording on my part (I was in a hurry).  It should have
> been: a pre/postcondition/invariant usually doesn't involve redoing the
> whole calculation but just checking values of "data entities", both in
> the absolute and relative to one another.  Most of the time you
> don't write pre/postconditions/invariants involving (with the functions
> called) tens of thousands of lines of code and gazillions of CPU
> cycles.

Yes, I agree.  The best assertions are much shorter than the executable
code.  For example:

    function F return Integer is (123) with Post => F'Result = 123;

If you manage to prove that postcondition, you haven't accomplished
much.  Likewise, if 123 were replaced (in both places) with 1000 lines
of code, you still haven't accomplished much -- it's just as likely
there's a bug in the postcondition as in the function itself, especially
if the postcondition duplicates the function, which I assume is what you
meant by "redoing the whole calculation" above.

But if the function is long, but its postcondition is short, then
proving the postcondition is a big win.  The goal is to somehow
characterize what the function is supposed to do more concisely than the
code that does it.  Not always easy.
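
A minimal sketch of that shape, with invented names; the body of Sort 
might be a long, clever algorithm, while the postcondition states the 
intent in a couple of lines:

    type Int_Array is array (Positive range <>) of Integer;

    function Is_Sorted (A : Int_Array) return Boolean is
       (for all I in A'First .. A'Last - 1 => A (I) <= A (I + 1));

    --  Short postcondition over a potentially long implementation:
    --  proving or checking it adds information instead of duplicating
    --  the code.
    procedure Sort (A : in out Int_Array)
       with Post => Is_Sorted (A);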

- Bob



* Re: {Pre,Post}conditions and side effects
  2014-12-22 17:18 ` Brad Moore
  2014-12-23  8:22   ` Jean François Martinez
@ 2014-12-23 21:53   ` Florian Weimer
  2014-12-24 11:41     ` Brad Moore
  1 sibling, 1 reply; 107+ messages in thread
From: Florian Weimer @ 2014-12-23 21:53 UTC (permalink / raw)


* Brad Moore:

> There is a proposal for a Global aspect to be considered for Ada
> 202x, which could be applied to subprograms. Such an aspect would
> indicate which global variables a subprogram body references. It
> would also let you state that a subprogram references no globals.
>
> eg.
>        function Foo (X : Integer) return Integer
>              with Globals => null;
>
> I believe such a subprogram would not have side-effects

It could still call subprograms which modify global variables, or use
its arguments to modify global state.

Even if you solve this (different, quite feasible approaches exist),
(lack of) termination of the evaluation of conditions would still be a
visible side effect, and that's more difficult to tackle.


* Re: {Pre,Post}conditions and side effects
  2014-12-23 21:09       ` Jean François Martinez
  2014-12-23 21:35         ` Robert A Duff
@ 2014-12-23 23:02         ` Peter Chapin
  2014-12-24  1:03           ` Robert A Duff
  1 sibling, 1 reply; 107+ messages in thread
From: Peter Chapin @ 2014-12-23 23:02 UTC (permalink / raw)



On Tue, 23 Dec 2014, Jean François Martinez wrote:

> Most of the time you don't write pre/postconditions/invariants involving 
> (with the functions called) tens of thousands of lines of code and 
> gazillions of CPU cycles.

I think, actually, it is fairly common for assertions to require 
"gazillions" of CPU cycles. In fact, it is easy to come up with examples 
where the pre- and postconditions take far longer to execute than the 
subprogram to which they are attached.

A simple example that I love because it is so simple yet so telling is 
binary search of an array. A reasonable precondition is that the array it 
is given is sorted. It might look like

     Pre => (for all I in A'First .. A'Last - 1 => (A(I) <= A(I + 1)))

This takes O(n) time to evaluate. Yet binary search is an O(log(n)) 
algorithm. For large arrays the precondition might take many thousands or 
even millions of times longer to execute than the subprogram itself.

I don't think there is anything particularly unusual about this example. 
Here's another case: suppose you had a procedure that added an item to 
some kind of balanced binary tree. The postcondition might assert that the 
tree is still balanced in whatever sense is appropriate... probably an 
O(n) operation. Yet the insertion procedure is probably O(log(n)). It's 
easy to imagine many examples.
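
A sketch of that balanced-tree case, with invented names (Is_Balanced 
would itself have to walk the whole tree):

     package Balanced_Trees is
        type Tree is private;

        --  O(n): visits every node to compare subtree heights.
        function Is_Balanced (T : Tree) return Boolean;

        --  Insertion itself is O(log(n)), so enabling this postcondition
        --  changes the cost of every call dramatically.
        procedure Insert (T : in out Tree; Key : in Integer)
           with Post => Is_Balanced (T);
     private
        type Node;
        type Tree is access Node;
        type Node is record
           Key         : Integer;
           Left, Right : Tree;
        end record;
     end Balanced_Trees;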

For this reason I assume that in most cases programs must be deployed with 
assertions disabled or else there is little chance the program will be 
able to meet its performance goals. Thus putting anything resembling 
essential program logic in an assertion is, of course, just wrong. 
Forbidding assertions with side effects might be nice, but the programmer 
still has to be careful with them anyway.

Peter


* Re: {Pre,Post}conditions and side effects
  2014-12-23 23:02         ` Peter Chapin
@ 2014-12-24  1:03           ` Robert A Duff
  2015-04-24  8:59             ` Jacob Sparre Andersen
  0 siblings, 1 reply; 107+ messages in thread
From: Robert A Duff @ 2014-12-24  1:03 UTC (permalink / raw)


Peter Chapin <PChapin@vtc.vsc.edu> writes:

> I think, actually, it is fairly common for assertions to require
> "gazillions" of CPU cycles. In fact, it is easy to come up with examples
> where the pre- and postconditions take far longer to execute than the
> subprogram to which they are attached.
>
> A simple example that I love because it is so simple yet so telling is
> binary search of an array. A reasonable precondition is that the array
> it is given is sorted. It might look like
>
>     Pre => (for all I in A'First .. A'Last - 1 => (A(I) <= A(I + 1)))
>
> This takes O(n) time to evaluate. Yet binary search is an O(log(n))
> algorithm. For large arrays the precondition might take many thousands
> or even millions of times longer to execute than the subprogram itself.
>
> I don't think there is anything particularly unusual about this
> example. Here's another case: suppose you had a procedure that added an
> item to some kind of balanced binary tree. The postcondition might
> assert that the tree is still balanced in whatever sense is
> appropriate... probably an O(n) operation. Yet the insertion procedure
> is probably O(log(n)). It's easy to imagine many examples.

Yes, all that's true.  But those would be better as
predicates/invariants instead of preconditions.  For ex., if you know
Sort produces a sorted array, and Binary_Search takes a sorted array,
you don't have to check for sorted-ness on entry to Binary_Search if
you've got a Sorted_Array.

But note that your "Pre" above is a good example of what I was saying in
a somewhat-unrelated post in this thread:  It is shorter and simpler
than doing a sort using some efficient sorting algorithm.
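
For concreteness, a sketch of the Sorted_Array idea with invented names, 
using a Dynamic_Predicate on the subtype:

    type Int_Array is array (Positive range <>) of Integer;

    subtype Sorted_Array is Int_Array
       with Dynamic_Predicate =>
          (for all I in Sorted_Array'First .. Sorted_Array'Last - 1 =>
             Sorted_Array (I) <= Sorted_Array (I + 1));

    --  If Sort is declared to yield a Sorted_Array, Binary_Search can
    --  simply take one instead of restating sorted-ness as a Pre.
    function Binary_Search (A : Sorted_Array; Key : Integer) return Natural;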

> For this reason I assume that in most cases programs must be deployed
> with assertions disabled or else there is little chance the program will
> be able to meet its performance goals.

Yes, or at least SOME assertions disabled.

I like to say, "If you don't need to disable assertions, then you don't
have enough assertions".  It's a silly sound bite that is not always
true, but there's a grain of truth in it.

>... Thus putting anything resembling
> essential program logic in an assertion is, of course, just
> wrong.

Yes.

>... Forbidding assertions with side effects might be nice, but the
> programmer still has to be careful with them anyway.

Yes.

-- Bob


* Re: {Pre,Post}conditions and side effects
  2014-12-23 21:53   ` Florian Weimer
@ 2014-12-24 11:41     ` Brad Moore
  0 siblings, 0 replies; 107+ messages in thread
From: Brad Moore @ 2014-12-24 11:41 UTC (permalink / raw)


On 2014-12-23 2:53 PM, Florian Weimer wrote:
> * Brad Moore:
>
>> There is a proposal for a Global aspect to be considered for Ada
>> 202x, which could be applied to subprograms. Such an aspect would
>> indicate which global variables a subprogram body references. It
>> would also let you state that a subprogram references no globals.
>>
>> eg.
>>         function Foo (X : Integer) return Integer
>>               with Globals => null;
>>
>> I believe such a subprogram would not have side-effects
>
> It could still call subprograms which modify global variables, or use
> its arguments to modify global state.

Not with our proposal, because the idea is that the compiler would not 
allow one to apply "with Globals => null" to a subprogram if it in turn 
called any subprograms that did not also have "with Globals => null" in 
their contracts.
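
A sketch of how that rule would read in source, using the proposed (not 
yet standard) Globals syntax, so none of this compiles today:

    Counter : Natural := 0;

    function Bump return Integer
       with Globals => (in out => Counter);

    function Helper (X : Integer) return Integer
       with Globals => null;

    --  Accepted under the proposed rule: Foo only calls subprograms
    --  that are themselves marked Globals => null.
    function Foo (X : Integer) return Integer is (Helper (X) + 1)
       with Globals => null;

    --  Would be rejected: Bar calls Bump, whose contract names a global.
    --  function Bar return Integer is (Bump)
    --     with Globals => null;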

>
> Even if you solve this (different, quite feasible approaches exist),
> (lack of) termination of the evaluation of conditions would still be a
> visible side effect, and that's more difficult to tackle.
>

Yes, I suppose it is impossible to have any function that truly has no 
side effects. They all use cycles of the CPU, which could be viewed as a 
side effect, since that is a shared resource. The elapsing of time is 
another side effect that can't be eliminated, and any processing of pre 
and post conditions affects both of these.

I think these are finer-grained side effects than the ones the 
original poster was concerned about, though.

Brad


* Re: {Pre,Post}conditions and side effects
  2014-12-24  1:03           ` Robert A Duff
@ 2015-04-24  8:59             ` Jacob Sparre Andersen
  2015-04-24  9:18               ` J-P. Rosen
                                 ` (3 more replies)
  0 siblings, 4 replies; 107+ messages in thread
From: Jacob Sparre Andersen @ 2015-04-24  8:59 UTC (permalink / raw)


Robert A Duff <bobduff@shell01.TheWorld.com> writes:
> Peter Chapin <PChapin@vtc.vsc.edu> writes:

>> ... Thus putting anything resembling essential program logic in an
>> assertion is, of course, just wrong.
>
> Yes.

But when are you putting "essential program logic" in an assertion?

1) subtype Non_Negative_Matrix is Ada.Numerics.Real_Arrays.Real_Matrix
     with Dynamic_Predicate
            => (Non_Negative_Matrix'First (1) = 1) and
               (Non_Negative_Matrix'First (2) = 1) and
               (for all E of Non_Negative_Matrix => E >= 0.0);

2) procedure New_Line (File : in File_Type)
     with Pre => Is_Open (File) and then
                 Mode (File) in Out_File | Append_File;

3) function Binary_Search (Source : in List;
                           Key    : in Keys) return Values
     with Pre => Sort (Source); --  Sorts Source if it isn't already sorted.

I consider examples (1) and (2) fine, but example (3) a very bad idea.

At the same time, I know that my application may fail silently if the
assertion in example (1) isn't true.

When it comes to example (2), I expect that the operating system (if
nothing else) will make sure that my application doesn't fail silently
if the assertion isn't true.

But I dislike banning "essential program logic" in assertions, as any
assertion is program logic.  And if it isn't essential, why should it be
there?

One problem I have with assertion aspects is that I get the same
exception no matter which mistake I have made.  If I put the check
inside a subprogram instead of in its profile, I can get more clear
information about which kinds of mistakes I make.

Greetings,

Jacob
-- 
"Any politician with a live opposition does not understand
 how to make proper use of the true instruments of politics."


* Re: {Pre,Post}conditions and side effects
  2015-04-24  8:59             ` Jacob Sparre Andersen
@ 2015-04-24  9:18               ` J-P. Rosen
  2015-04-24 23:39                 ` Randy Brukardt
  2015-04-24 12:10               ` G.B.
                                 ` (2 subsequent siblings)
  3 siblings, 1 reply; 107+ messages in thread
From: J-P. Rosen @ 2015-04-24  9:18 UTC (permalink / raw)


On 24/04/2015 10:59, Jacob Sparre Andersen wrote:
> One problem I have with assertion aspects is that I get the same
> exception no matter which mistake I have made.  If I put the check
> inside a subprogram instead of in its profile, I can get more clear
> information about which kinds of mistakes I make.
Your wishes will soon be satisfied; see AI12-0022-1 and AI12-0054-2
(raise expressions and the aspect Predicate_Failure).
-- 
J-P. Rosen
Adalog
2 rue du Docteur Lombard, 92441 Issy-les-Moulineaux CEDEX
Tel: +33 1 45 29 21 52, Fax: +33 1 45 29 25 00
http://www.adalog.fr


* Re: {Pre,Post}conditions and side effects
  2015-04-24  8:59             ` Jacob Sparre Andersen
  2015-04-24  9:18               ` J-P. Rosen
@ 2015-04-24 12:10               ` G.B.
  2015-04-24 14:40                 ` Jacob Sparre Andersen
  2015-04-24 22:26               ` Peter Chapin
  2015-04-25  0:31               ` Bob Duff
  3 siblings, 1 reply; 107+ messages in thread
From: G.B. @ 2015-04-24 12:10 UTC (permalink / raw)


On 24.04.15 10:59, Jacob Sparre Andersen wrote:
> But I dislike banning "essential program logic" in assertions, as any
> assertion is program logic.  And if it isn't essential, why should it be
> there?

This definition involving "essential" does not reflect "contract"
properly, IMHO. And it is fallacious in that it fails to reflect
the particulars that make assertions, as in "assertion as per
contract", different from just program logic. To see this, I
think it is helpful to free oneself of the limitations of looking
at assertions from just a programmer's point of view.

Assertions (of the contract) should never, ever be understood
to be consequences of the program text with or without the assertions,
insofar as they are agreed upon by humans to be true about the
program to be, its intent in particular. They cover a program
that may even have to be written yet. They do share some
of the properties of agreements manifest in specifications.

For contracts' clauses, possibly supported by assertions as
part of source text, there isn't even a conceptual necessity
to have proper assertions tested in the very same environment
as the program proper: a copy will do in many cases of
proper assertions, as these are pure Boolean functions.

The fact that assertions can be expressed in Ada is purely
accidental; SPARK shows that a different language can be used,
and may even be more expressive.  Comments do count, too,
such as RM statements about O(op). The latter can be understood
as a part of contract between any user of Ada and the
implementer of Ada.

An improper assertion (if I may call it that for reasons
of delineation) will modify that part of the program's data
which is covered by the contract, data to be handled solely
by the program which would yield the same effects that
are stated in the contract, with or without assertion checking.
So, using improper assertions, you'd be making a mess,
even though results might come out right (only deferring
proof obligations having to do with the improper assertion).

OTOH, whenever testing an assertion requires computation,
it is essential to keep its doings separate from what the
program needs to compute to fulfill the contract.

So, the idea of considering assertions of contracts (as opposed
to plain old debugging asserts) from the viewpoint of their
implementation is misleading.

Illustration:

Company X agrees this is in a contract:

   If, before calling Binary_Search, input is sorted,
   then the result of calling Binary_Search will be ...

That's a statement that can be made part of a contract, and it
may be formally reflected in Ada aspects, if possible.
It is expressing the idea that Source is sorted.

Last but not least, a precondition should never be anything
but an assumption. As not checking it at run-time is a valid
way of handling preconditions, the outcome of not testing should
still not create havoc; so, the caller needs to make sure
that the assertion of the contract is always true, and never
depend on how it is tested, or on whether it is tested.




* Re: {Pre,Post}conditions and side effects
  2015-04-24 12:10               ` G.B.
@ 2015-04-24 14:40                 ` Jacob Sparre Andersen
  2015-04-24 16:29                   ` G.B.
  2015-04-24 23:46                   ` Randy Brukardt
  0 siblings, 2 replies; 107+ messages in thread
From: Jacob Sparre Andersen @ 2015-04-24 14:40 UTC (permalink / raw)


"G.B." <bauhaus@futureapps.invalid> wrote:

[...]

> Last but not least, a precondition should never be anything but an
> assumption. As not checking it at run-time is a valid way of handling
> preconditions, the outcome of not testing should still not create
> havoc; so, the caller needs to make sure that the assertion of the
> contract is always true, and never depend on how it is tested, or on
> whether it is tested.

Taking that view, there isn't any point combining the contract notation
of Ada 2012 and SPARK 2014, as it would prevent you from writing a
single source text which was valid for both languages.

SPARK would complain about your in-subprogram check (mirroring the
precondition), as raising an exception is illegal in SPARK (and dead
code probably is as well).

Greetings,

Jacob
--
"[...] *transfer* a bit of salary from the person who writes
 a bug to the person that finds a bug..." -- Keith Ray


* Re: {Pre,Post}conditions and side effects
  2015-04-24 14:40                 ` Jacob Sparre Andersen
@ 2015-04-24 16:29                   ` G.B.
  2015-04-24 23:46                   ` Randy Brukardt
  1 sibling, 0 replies; 107+ messages in thread
From: G.B. @ 2015-04-24 16:29 UTC (permalink / raw)


On 24.04.15 16:40, Jacob Sparre Andersen wrote:
> "G.B." <bauhaus@futureapps.invalid> wrote:
>
> [...]
>
>> Last but not least, a precondition should never be anything but an
>> assumption. As not checking it at run-time is a valid way of handling
>> preconditions, the outcome of not testing should still not create
>> havoc; so, the caller needs to make sure that the assertion of the
>> contract is always true, and never depend on how it is tested, or on
>> whether it is tested.
>
> Taking that view, there isn't any point combining the contract notation
> of Ada 2012 and SPARK 2014, as it would prevent you from writing a
> single source text which was valid for both languages.

That's hardly possible anyway, given the number of restrictions
that SPARK 2014 imposes, even when not taking assertions into account.
Is
   --# hide
all gone?

Also, why would there be a technical need to have a contract use
only Ada or SPARK 2014 as the single language? That's not done
in the LRM, which uses logic and mathematics. If you write SPARK 2014,
non-SPARK compilers could simply omit analysis of the language used
in Pre => ... etc., presuming they can handle the syntax
(likely, I should think). And some things cannot reasonably be
stated formally anyway.  (I guess it becomes apparent that
maybe a combined language turns into a combined stricture! ;-)

The makers of, respectively, GNAT and SPARK have merged; that seems
like a start for better merging the languages. Tucker Taft,
now also at AdaCore, has alluded to excessive restrictions
in SPARK. The definitions of SPARK have added more of Ada over the
years already (tasks, tagged types, ...).

So, I guess, in the long run, there is no more risk of
Ada programs that suffer from conflicting language desires than
there is now when source texts show non-Ada 'Img and GNAT is
not your Ada compiler. ;-)



* Re: {Pre,Post}conditions and side effects
  2014-12-22 23:32 ` Randy Brukardt
@ 2015-04-24 17:59   ` Shark8
  0 siblings, 0 replies; 107+ messages in thread
From: Shark8 @ 2015-04-24 17:59 UTC (permalink / raw)


On Monday, December 22, 2014 at 4:32:22 PM UTC-7, Randy Brukardt wrote:
> "Jean François Martinez" wrote in message 
> >The first time I read about pre/postconditions in Ada 2012 I felt some 
> >discomfort about functions invoked in pre/post conditions being allowed 
> >to have side effects.  Thus it is possible to have programs that work 
> >when checks are enabled and break when they are disabled, because the 
> >side effects "make them work".  I don't like "solutions" like "the 
> >compiler will allow you to shoot yourself in the foot, so be careful".  
> >That is C, not Ada.
> >
> >Perhaps it would have been a good idea to have a No_Side_Effects aspect, 
> >with only functions labelled with this aspect allowed in pre/post 
> >conditions.
> 
> This was proposed with the original proposal. It was eventually dropped 
> because it is too hard to define what a side-effect actually is (easy 
> definitions are too fierce).
> 
> [SNIP]
> 
> To summarize, we spent a lot of time agonizing about this particular issue, 
> wording solutions, and the like. It's most certainly not the case that it 
> wasn't considered, but more that we felt that the alternative of strong 
> restrictions would be barely usable.
> 
>                                Randy Brukardt, ARG Editor.

I too don't like the possibility of letting the side effects of aspects impact program correctness, but you do present some good reasoning (the difficulty of defining those, and the immaturity of static analysis on that point [if it's not defined, you can't check it]). -- All that said, thank you, and the rest of the ARG, for your work on Ada in general, and on this issue in particular.



* Re: {Pre,Post}conditions and side effects
  2015-04-24  8:59             ` Jacob Sparre Andersen
  2015-04-24  9:18               ` J-P. Rosen
  2015-04-24 12:10               ` G.B.
@ 2015-04-24 22:26               ` Peter Chapin
  2015-04-25  0:13                 ` Randy Brukardt
  2015-04-25  5:51                 ` Dmitry A. Kazakov
  2015-04-25  0:31               ` Bob Duff
  3 siblings, 2 replies; 107+ messages in thread
From: Peter Chapin @ 2015-04-24 22:26 UTC (permalink / raw)


On Fri, 24 Apr 2015, Jacob Sparre Andersen wrote:

> But when are you putting "essential program logic" in an assertion?

I think a serviceable rule is this: if the program works as required, with 
all necessary checks still present but with all assertions removed, then we 
can say the assertions contain no essential program logic.

In a correct program all assertions should always be true.

> 1) subtype Non_Negative_Matrix is Ada.Numerics.Real_Arrays.Real_Matrix
>     with Dynamic_Predicate
>            => (Non_Negative_Matrix'First (1) = 1) and
>               (Non_Negative_Matrix'First (2) = 1) and
>               (for all E of Non_Negative_Matrix => E >= 0.0);

Although the Dynamic_Predicate asserts that the matrix elements are all 
non-negative, this does not remove the program's obligation to include 
checks that no negative elements are added to the matrix. The assertion 
only exists to catch mistakes in those checks. It does not exist to 
actually *be* those checks. In that respect the assertion is not 
"essential program logic."

> 2) procedure New_Line (File : in File_Type)
>     with Pre => Is_Open (File) and then
>                 Mode (File) in Out_File | Append_File;

Similarly here the program is still obligated to only pass File objects to 
New_Line that represent open files. If the program accidentally passes an 
unopened file to New_Line the assertion will catch the logical error. 
However, the assertion should not take the place of earlier checks. Again 
the assertion is not essential program logic.

> 3) function Binary_Search (Source : in List;
>                           Key    : in Keys) return Values
>     with Pre => Sort (Source); --  Sorts Source if it isn't already sorted.

Yes, definitely wrong. On the other hand using something like

     with Pre => Is_Sorted(Source);

could be reasonable.

> I consider examples (1) and (2) fine, but example (3) a very bad idea.

I agree that (1) and (2) are fine, but that doesn't mean the program 
should rely on the assertions for its proper functioning. The assertions 
check correctness; they don't implement it. Even if the assertions are 
removed, the program should still execute properly.

> But I dislike banning "essential program logic" in assertions, as any 
> assertion is program logic.  And if it isn't essential, why should it be 
> there?

Because we often make mistakes and it's nice to have our thinking double 
checked. Also, of course, the assertions make our intentions known to 
tools, such as SPARK, that can automatically verify our code implements 
the conditions we are asserting.

> One problem I have with assertion aspects is that I get the same 
> exception no matter which mistake I have made.  If I put the check 
> inside a subprogram instead of in its profile, I can get more clear 
> information about which kinds of mistakes I make.

Putting the check inside the subprogram is quite a different thing. That 
is part of your implementation of correctness. Since assertions should 
never fail, using the same exception for all of them isn't terrible. That 
said, the upcoming feature that allows different exceptions to be used 
when an assertion fails is nice too.

Peter




* Re: {Pre,Post}conditions and side effects
  2015-04-24  9:18               ` J-P. Rosen
@ 2015-04-24 23:39                 ` Randy Brukardt
  0 siblings, 0 replies; 107+ messages in thread
From: Randy Brukardt @ 2015-04-24 23:39 UTC (permalink / raw)



"J-P. Rosen" <rosen@adalog.fr> wrote in message 
news:mhd1ne$miv$1@dont-email.me...
> On 24/04/2015 10:59, Jacob Sparre Andersen wrote:
>> One problem I have with assertion aspects is that I get the same
>> exception no matter which mistake I have made.  If I put the check
>> inside a subprogram instead of in its profile, I can get more clear
>> information about which kinds of mistakes I make.
> Your wishes will soon be satisfied; see AI12-0022-1 and AI12-0054-2
> (raise expressions and the aspect Predicate_Failure).

Raise expressions have been implemented in GNAT for quite a while; they 
probably exist in the compiler you're using. Predicate_Failure wasn't 
implemented until very recently (after someone wrote an ACATS test for it), 
so for that you might have to wait.
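
For reference, a sketch of both features against Ada.Text_IO (the Report 
package and Write_Header procedure are invented here):

    with Ada.Text_IO; use Ada.Text_IO;

    package Report is
       --  A raise expression (AI12-0022-1) lets a precondition name
       --  the exception it raises on failure:
       procedure Write_Header (File : in File_Type)
          with Pre => Is_Open (File) or else raise Status_Error;

       --  Predicate_Failure (AI12-0054-2) plays the same role for a
       --  predicate:
       subtype Open_File is File_Type
          with Dynamic_Predicate => Is_Open (Open_File),
               Predicate_Failure => raise Status_Error with "file is not open";
    end Report;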

                                        Randy.




* Re: {Pre,Post}conditions and side effects
  2015-04-24 14:40                 ` Jacob Sparre Andersen
  2015-04-24 16:29                   ` G.B.
@ 2015-04-24 23:46                   ` Randy Brukardt
  1 sibling, 0 replies; 107+ messages in thread
From: Randy Brukardt @ 2015-04-24 23:46 UTC (permalink / raw)


"Jacob Sparre Andersen" <jacob@jacob-sparre.dk> wrote in message 
news:87oamdlhd3.fsf@adaheads.sparre-andersen.dk...
> "G.B." <bauhaus@futureapps.invalid> wrote:
>
> [...]
>
>> Last but not least, a precondition should never be anything but an
>> assumption. As not checking it at run-time is a valid way of handling
>> preconditions, the outcome of not testing should still not create
>> havoc; so, the caller needs to make sure that the assertion of the
>> contract is always true, and never depend on how it is tested, or on
>> whether it is tested.
>
> Taking that view, there isn't any point combining the contract notation
> of Ada 2012 and SPARK 2014, as it would prevent you from writing a
> single source text which was valid for both languages.
>
> SPARK would complain about your in-subprogram check (mirroring the
> precondition), as raising an exception is illegal in SPARK (and dead
> code probably is as well).

Besides, Ada 2012 has a mechanism to ensure that preconditions are in fact 
evaluated; that exists for this very reason. If you have a local 
Assertion_Policy of Check, that applies to the declarations and the 
precondition ought to be evaluated no matter what policy is in effect at the 
point of the call. (Whether GNAT actually gets this right is unknown.)
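
A sketch of such a local policy, with an invented Stacks package; the 
comments restate the behaviour described above:

    package Stacks is
       pragma Assertion_Policy (Pre => Check);
       --  Applies to the declarations that follow, so the precondition
       --  below should be checked regardless of the policy in effect
       --  at the point of the call.

       type Stack is private;

       function Is_Full (S : Stack) return Boolean;

       procedure Push (S : in out Stack; X : in Integer)
          with Pre => not Is_Full (S);

    private
       type Stack is record
          Top : Natural := 0;
       end record;
    end Stacks;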

Otherwise, one could not "hoist" the various rules for I/O, containers, and 
the like into preconditions. Which would seem like madness. Certainly 
checking the same thing twice (which is what would happen if you put the 
condition into the precondition and then manually checked it a second time 
in the body) is madness.

There is a camp that thinks that ignoring contract assertions is very 
similar to suppressing checks, and anything that happens after doing that is 
effectively erroneous. (That's NOT the wording in the Standard.) For that 
group, hoisting things into preconditions is fine. Otherwise, one needs to 
take steps to ensure that they're evaluated.

                                           Randy.




* Re: {Pre,Post}conditions and side effects
  2015-04-24 22:26               ` Peter Chapin
@ 2015-04-25  0:13                 ` Randy Brukardt
  2015-04-25  1:01                   ` Peter Chapin
  2015-04-25  5:51                 ` Dmitry A. Kazakov
  1 sibling, 1 reply; 107+ messages in thread
From: Randy Brukardt @ 2015-04-25  0:13 UTC (permalink / raw)


"Peter Chapin" <PChapin@vtc.vsc.edu> wrote in message 
news:alpine.CYG.2.11.1504241811360.5268@WIL414CHAPIN.vtc.vsc.edu...
...
> In a correct program all assertions should always be true.

Sure, but that applies to lots of other things, too. For instance, in a 
correct program, Constraint_Error or Program_Error should not be raised. But 
it still happens.

>> 1) subtype Non_Negative_Matrix is Ada.Numerics.Real_Arrays.Real_Matrix
>>     with Dynamic_Predicate
>>            => (Non_Negative_Matrix'First (1) = 1) and
>>               (Non_Negative_Matrix'First (2) = 1) and
>>               (for all E of Non_Negative_Matrix => E >= 0.0);
>
> Although the Dynamic_Predicate asserts that the matrix elements are all 
> non-negative, this does not remove the program's obligation to include 
> checks that no negative elements are added to the matrix. The assertion 
> only exists to catch mistakes in those checks. It does not exist to 
> actually *be* those checks. In that respect the assertion is not 
> "essential program logic."

I disagree with this (see below).

>> 2) procedure New_Line (File : in File_Type)
>>     with Pre => Is_Open (File) and then
>>                 Mode (File) in Out_File | Append_File;
>
> Similarly here the program is still obligated to only pass File objects to 
> New_Line that represent open files. If the program accidentally passes an 
> unopened file to New_Line the assertion will catch the logical error. 
> However, the assertion should not take the place of earlier checks. Again 
> the assertion is not essential program logic.

I definitely disagree here. This example is essentially similar to the one 
given in the upcoming Corrigendum (3.2.4(41-51/4)). In a case like this, the 
precondition (or predicates, as in the example) *replaces* the checks required 
by English text in the RM. There would be no internal checks of correctness.

You are of course correct that no caller should call New_Line with a closed 
file, but that's irrelevant because it can happen anyway (there is no static 
way to prevent it). There has to be code somewhere to handle it. So, in such 
a case, a precondition serves two purposes: (1) to signal to the client what 
conditions are expected, and (2) to determine what happens if those 
conditions aren't met. (2) certainly is "essential program logic", insofar 
as one cannot meet the published specification of New_Line without it.

Ada prior to Ada 2012 has a problem in that the reasons an exception can be 
raised conflate programmer mistakes with conditions that are impossible 
for the programmer to know (consider the difference between whether a file 
object is open vs. whether a file exists on the disk). Preconditions and 
predicates provide a way to separately specify the first kind of situation 
vs. the second kind. (Ultimately, one hopes, compilers will be able to 
eliminate much of the runtime checking associated with preconditions and 
predicates, which is not possible in the pre-Ada 2012 world.)

                                   Randy.




* Re: {Pre,Post}conditions and side effects
  2015-04-24  8:59             ` Jacob Sparre Andersen
                                 ` (2 preceding siblings ...)
  2015-04-24 22:26               ` Peter Chapin
@ 2015-04-25  0:31               ` Bob Duff
  2015-04-25 12:08                 ` vincent.diemunsch
  3 siblings, 1 reply; 107+ messages in thread
From: Bob Duff @ 2015-04-25  0:31 UTC (permalink / raw)


Jacob Sparre Andersen <jacob@jacob-sparre.dk> writes:

> Robert A Duff <bobduff@shell01.TheWorld.com> writes:
>> Peter Chapin <PChapin@vtc.vsc.edu> writes:
>
>>> ... Thus putting anything resembling essential program logic in an
>>> assertion is, of course, just wrong.
>>
>> Yes.
>
> But when are you putting "essential program logic" in an assertion?

I think what Peter meant by "essential program logic" is code that,
if deleted from the program, would cause the program to malfunction.

> 1) subtype Non_Negative_Matrix is Ada.Numerics.Real_Arrays.Real_Matrix
>      with Dynamic_Predicate
>             => (Non_Negative_Matrix'First (1) = 1) and
>                (Non_Negative_Matrix'First (2) = 1) and
>                (for all E of Non_Negative_Matrix => E >= 0.0);
>
> 2) procedure New_Line (File : in File_Type)
>      with Pre => Is_Open (File) and then
>                  Mode (File) in Out_File | Append_File;

The assertions in (1) and (2) are not "essential program logic"; if you
delete them, the program will still work properly.  That's fine -- you
should write assertions so that deleting them from a correct program
will have no effect.

> 3) function Binary_Search (Source : in List;
>                            Key    : in Keys) return Values
>      with Pre => Sort (Source); --  Sorts Source if it isn't already sorted.

That's illegal.  (I assume Sort is a procedure with an 'in out'
parameter that sorts in place.  Passing Source (an 'in' parameter) is 
illegal.)  I suppose you could make Source 'in out', but as you say
below that's a very bad idea.

> I consider examples (1) and (2) fine, but example (3) a very bad idea.

Agreed.  (3) is trying to put "essential program logic" in an assertion,
which is a bad idea.

> At the same time, I know that my application may fail silently if the
> assertion in example (1) isn't true.
>
> When it comes to example (2), I expect that the operating system (if
> nothing else) will make sure that my application doesn't fail silently
> if the assertion isn't true.
>
> But I dislike banning "essential program logic" in assertions, as any
> assertion is program logic.  And if it isn't essential, why should it be
> there?

Same reason we put comments in the code.  Comments are not "essential
program logic" in the sense defined above -- if you delete all the
comments, the program will still work.  But we still want comments.
Likewise, one should normally write assertions (like Pre and Predicate)
so the program still works if they are deleted.

Assertions are like comments, except we have a higher confidence
that they are actually true.

> One problem I have with assertion aspects is that I get the same
> exception no matter which mistake I have made.  If I put the check
> inside a subprogram instead of in its profile, I can get more clear
> information about which kinds of mistakes I make.

You can say:

    Pre => X = Y or else raise X_Not_Equal_To_Y;
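
Applied to the New_Line example earlier in the thread, a sketch might read
(Status_Error and Mode_Error are the exceptions Ada.Text_IO already declares
for these two situations; whether the standard packages should actually carry
such preconditions is a separate question):

    procedure New_Line (File : in File_Type)
      with Pre => (Is_Open (File) or else raise Status_Error)
                  and then
                  (Mode (File) in Out_File | Append_File
                     or else raise Mode_Error);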

- Bob

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-04-25  0:13                 ` Randy Brukardt
@ 2015-04-25  1:01                   ` Peter Chapin
  0 siblings, 0 replies; 107+ messages in thread
From: Peter Chapin @ 2015-04-25  1:01 UTC (permalink / raw)


On Fri, 24 Apr 2015, Randy Brukardt wrote:

>>> 2) procedure New_Line (File : in File_Type)
>>>     with Pre => Is_Open (File) and then
>>>                 Mode (File) in (Out_File | Append_File);
>>
>> Similarly here the program is still obligated to only pass File objects to
>> New_Line that represent open files. If the program accidentally passes an
>> unopened file to New_Line the assertion will catch the logical error.
>> However, the assertion should not take the place of earlier checks. Again
>> the assertion is not essential program logic.
>
> I definitely disagree here. This example is essentially similar to the one
> given in the upcoming Corrigendum (3.2.4(41-51/4). In a case like this, the
> precondition (or predicates as in the example) *replace* the checks required
> by English text in the RM. There would no internal checks of correctness.

In this case it's not the internal checks I mean. Hoisting the internal 
checks into preconditions makes sense to me, at least in certain (many?) 
cases. In my comments above I'm talking about checks occurring before 
New_Line is called.

Somewhere the programmer tried to open a file. If the programmer attempts 
to call New_Line without first verifying that the file opened 
successfully, that's a logical error in the program. Checking that 
Is_Open(File) is true provides some protection against such an error... 
regardless of whether the check is a precondition or done inside the body of 
New_Line. Either way, in a correct program that check should never fail. 
The beauty of doing it in a precondition is that the "unnecessary" check 
can be removed by changing the assertion policy.

In contrast imagine a procedure that takes a file and does some processing 
on it. Suppose the procedure raises some exception if the file has the 
wrong format. The programmer might decide that it's not wrong to call the 
procedure with an incorrectly formatted file, and let that be a matter for 
the procedure to worry about. In that case, adding the check as a 
precondition doesn't seem right; a correct program might call the 
procedure with a badly formatted file.

On the other hand if the programmer decides it's illogical to call the 
procedure with an incorrectly formatted file because the file has 
(supposedly) been verified previously, using a precondition to check the 
format makes sense.

Same procedure, same check... the sensibility of making the check a 
precondition depends on the context in which the procedure is used. In the 
first case the caller relies on the procedure to do the check. In the 
second case the procedure relies on the caller to do it. Ultimately it 
ends up being a design decision.
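
As a sketch of the two designs (Has_Valid_Format and Format_Error are made-up 
names used only for illustration):

    -- Caller is responsible: the check is an assertion that a correct
    -- program never trips, and it may be disabled.
    procedure Process (File : in File_Type)
      with Pre => Has_Valid_Format (File);

    -- Procedure is responsible: the exception is part of the behaviour
    -- and must not be disabled.
    procedure Process (File : in File_Type);
    -- Raises Format_Error if File is not correctly formatted.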

Peter

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-04-24 22:26               ` Peter Chapin
  2015-04-25  0:13                 ` Randy Brukardt
@ 2015-04-25  5:51                 ` Dmitry A. Kazakov
  1 sibling, 0 replies; 107+ messages in thread
From: Dmitry A. Kazakov @ 2015-04-25  5:51 UTC (permalink / raw)


On Fri, 24 Apr 2015 18:26:41 -0400, Peter Chapin wrote:

> On Fri, 24 Apr 2015, Jacob Sparre Andersen wrote:
> 
>> But when are you putting "essential program logic" in an assertion?
> 
I think a serviceable rule is this: If the program works as required, with 
> all necessary checks still present, with all assertions removed, then we 
> can say the assertions contain no essential program logic.

Where "as required" is defined in part by the assertions, of course ...

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-04-25  0:31               ` Bob Duff
@ 2015-04-25 12:08                 ` vincent.diemunsch
  2015-04-25 16:37                   ` Georg Bauhaus
  2015-05-06 21:07                   ` Randy Brukardt
  0 siblings, 2 replies; 107+ messages in thread
From: vincent.diemunsch @ 2015-04-25 12:08 UTC (permalink / raw)


On Saturday, 25 April 2015 at 02:31:57 UTC+2, Bob Duff wrote:

> Same reason we put comments in the code.  Comments are not "essential
> program logic" in the sense defined above -- if you delete all the
> comments, the program will still work.  But we still want comments.
> Likewise, one should normally write assertions (like Pre and Predicate)
> so the program still works if they are deleted.
> 
> Assertions are like comments, except we have a higher confidence
> that they are actually true.
> 

I agree. Assertions express logical properties of the program. One can have high
confidence in them for two reasons:
1. mathematical correctness according to a given theory
2. proof, using a tool, that the code is coherent with the assertion.

But assertions should stay as comments, for they are not code but logical formulas expressed
in a mathematical language. They were comments in SPARK 05 and that is still the case in Frama-C
and many other formal proof systems. Hoare logic is supposed to give "correctness by construction":
the ability to never fail a runtime test. This is required in safety-critical systems.

But what Ada and SPARK 2014 are doing is "design by contract", as Bertrand Meyer 
called it. This conflates a precondition with a runtime test. It may
look appealing at first, but it is nothing more than a test harness put around
a subprogram, with all the problems that entails: how do you debug it? Should it raise
exceptions? It breaks the separation between specification and implementation, etc.
Therefore, I don't think that it is the right choice for a language that is mainly used in
safety-critical systems.

https://groups.google.com/forum/#!searchin/comp.lang.ada/future$20of$20spark/comp.lang.ada/klr0i37j0uk/5IeGMdlh7PYJ

Kind regards,

Vincent


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-04-25 12:08                 ` vincent.diemunsch
@ 2015-04-25 16:37                   ` Georg Bauhaus
  2015-05-06 21:07                   ` Randy Brukardt
  1 sibling, 0 replies; 107+ messages in thread
From: Georg Bauhaus @ 2015-04-25 16:37 UTC (permalink / raw)


On 25.04.15 14:08, vincent.diemunsch@gmail.com wrote:
> But what Ada and SPARK 2014 are doing is "design by contract", as Bertrand Meyer
> called it. This conflates a precondition with a runtime test.

Actually, Meyer insists that contracts have an associated
notion, that of proof obligation. And the description of DbC
declares quite openly that not needing to perform run-time
tests (e.g. defensive programming) is a design goal.

What you get in the lesser (than some fancy ideal) situation is
summarized in a table of OOSC2, §11.6 (coincidence? ;-), for a stack:

   Put        OBLIGATIONS               BENEFITS

   Client     satisfy Pre => ...        from Post => ...

              Only call Put(X) on a     Get the stack updated: not
              non-full stack.           Empty, X on top, Item yields
                                        X, Count increased by 1.

   Supplier   satisfy Post => ...       from Pre => ...

              Update the stack          Simpler processing thanks to
              representation to have    the assumption that the
              X on top (Item yields     stack is not full.
              X), Count increased by
              1, not Empty.
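
In Ada 2012 terms that row might be sketched roughly as follows (Stack, Item
and the query functions are hypothetical):

    procedure Put (S : in out Stack; X : in Item)
      with Pre  => not Is_Full (S),                 -- client's obligation
           Post => not Is_Empty (S)                 -- supplier's promise
                   and Top (S) = X
                   and Count (S) = Count (S)'Old + 1;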


Looking at SPARK 2014, it does not seem to have changed from earlier SPARK
WRT being a tool for analysis before run time.

Ada, OTOH, looks like it is becoming a programming language facilitating
either kind of checking, as before, but more extensively and more
formally, with more of it for the programmer to write.


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-04-25 12:08                 ` vincent.diemunsch
  2015-04-25 16:37                   ` Georg Bauhaus
@ 2015-05-06 21:07                   ` Randy Brukardt
  2015-05-06 22:10                     ` Paul Rubin
                                       ` (2 more replies)
  1 sibling, 3 replies; 107+ messages in thread
From: Randy Brukardt @ 2015-05-06 21:07 UTC (permalink / raw)


<vincent.diemunsch@gmail.com> wrote in message 
news:ce4af31e-d491-479f-a5b3-0b3eedea2a68@googlegroups.com...
...
>But what Ada and SPARK 2014 are doing is "design by contract", as
> Bertrand Meyer called it. This makes a confusion between a precondition
> and a runtime test.
...
>Therefore, I don't think that it is the right choice for a langage that is 
>mainly
>used in safety critical systems.

If that's all Ada is going to be used for, it's completely irrelevant what 
features it has. I would never have used Ada personally if that was the 
case. Ada (IMHO) is a language to write (more) correct programs, no matter 
what kind of programs you write.

The basic idea behind the preconditions in Ada 2012 is to give a way for 
people (and implementations and tools as well) to ease into using additional 
checking and proof. Almost no one is willing to submit to the horrors of 
complete description of entities as required by SPARK. But (almost?) 
everyone using Ada would be interested in improving the correctness of their 
programs, one assertion at a time.

After all, consider how you likely learned the value of ranges and strong 
types and subtypes. Most everyone started out using mostly type Integer for 
everything. But one quickly notices that in the cases where separate types 
were used, more errors get detected at compile time and at run time -- 
eliminating what are usually long and painful debugging sessions. This 
positive feedback loop quickly turns most Ada programmers into advocates and 
heavy users of the simple tools available in Ada.
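
For example, two declarations are often all it takes to turn a whole class of
mix-ups into compile-time errors (a minimal sketch):

    type Row_Index    is range 1 .. 480;
    type Column_Index is range 1 .. 640;
    -- Passing a Column_Index where a Row_Index is expected is now rejected
    -- at compile time; out-of-range values are caught at run time.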

By extending those mechanisms (via predicates and preconditions) to 
arbitrary expressions, we allow much more such error detection to occur.

There's also a performance benefit. In my experience, 10-20% of Ada code is 
checking code correctness (such as checking that a container routine is not 
passed a null cursor). By making these sorts of rules preconditions or (better) 
predicates, we increase the chances that errors are detected immediately so 
no debugging is needed.
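
A sketch of the cursor case (modelled loosely on the containers, whose Ada 2012
versions state this rule only in English prose):

    procedure Delete (Container : in out List; Position : in out Cursor)
      with Pre => Has_Element (Position);
    -- The scattered "is this cursor valid?" checks inside the body collapse
    -- into one declaration that the compiler can often prove away at call sites.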

I don't believe that checks in Ada (of any kind) should ever be turned off. 
It's much better to let the compiler eliminate those that aren't needed 
(which is most of them, if your program is written correctly). The same will 
happen for predicates and preconditions and so on.

I also don't believe in separate proof tools. That's something that should 
be a basic part of the compiler (it has to be to do optimization, check 
elimination, and the like anyway). The difficult question is how to feed 
information about those things (particular checks known to fail) back to the 
programmer (as optimization phases tend to run without messages, and the 
messages that they do give are rather non-specific). In order for proof to 
be part of the compiler, the proof language has to be part of the language.

Lastly, fancy proof languages tend to be beyond the skill level of ordinary 
(and some not so ordinary) programmers. Despite 6 years of University 
education, a masters degree, and 30 years of real-world experience, I had to 
have both the meaning of the implication operator and a "universally 
quantified predicate" explained to me. As it turns out, I had run into both 
in the past, but not under those names. Moreover, the programming language 
semantics is already complete -- why invent a new syntax just to confuse 
people? (I know from the early days when the compiler was written in Pascal 
how hard it is to switch between two similar languages used in similar 
contexts. That would be a permanent rather than transient problem.)

As I noted before, Ada (prior to 2012) uses exceptions to describe both 
requirements on callers and error conditions in a routine. It's much better 
to separate these, because the former can be eliminated by proof techniques 
and the latter cannot (no proof technique can tell you that a file will 
exist when it is initially opened). Ada 2012 preconditions allow one to do 
this without having to change the defined semantics of a routine (meaning 
that they can be profitably used on existing code).

The idea that proof has any value by itself is the real problem here. At 
best it is a tool to reduce the needed checking in a program, and a way to 
detect problems at compile-time (in this latter use, it's only really of 
value as part of the compiler -- most programmers will not screw around with 
extra tools that have to be configured and managed and slow down the 
development process even more). Once you over-rely on proof, all you've done 
is forced your code into a new kind of specification, one that will have at 
least as many errors as the original. There's little value in that 
(especially in larger programs).

Anyway, more than enough ranting on this topic. IMHO, Ada 2012 gets it 
right, and building SPARK on top of it makes it more accessible to more 
programmers. That seems like a good idea, even if SPARK itself remains 
misguided.

                                       Randy.





^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-06 21:07                   ` Randy Brukardt
@ 2015-05-06 22:10                     ` Paul Rubin
  2015-05-07  9:01                     ` Georg Bauhaus
  2015-05-07 10:06                     ` Stefan.Lucks
  2 siblings, 0 replies; 107+ messages in thread
From: Paul Rubin @ 2015-05-06 22:10 UTC (permalink / raw)


"Randy Brukardt" <randy@rrsoftware.com> writes:
> Lastly, fancy proof languages tend to be beyond the skill level of ordinary 
> (and some not so ordinary) programmers. Despite, 6 years of University 
> education, a masters degree, and 30 years of real-world experience, I had to 
> have both the meaning of the implication operator and a "universally 
> quantified predicate" explained to me. As it turns out, I had run into both 
> in the past, but not under those names. 

I'd say there's two rather different approaches to verified software:
the approach based on Hoare logic (the precondition/postcondition stuff
as used in SPARK); and the approach based on dependent types, as used in
Coq.  Dependent types are the idea that you can have a typed lambda
calculus where the types are arbitrary predicates including quantified
ones involving other terms.  So if you have a term of a given type, that
constitutes a constructive proof of a value of that type.  By
typechecking the term and then compiling it into code, you have a
verified program that meets the specification expressed in the type.

The two approaches came out of different traditions so they tend to use
differing vocabulary (the stuff about quantified predicates is from
mathematical logic) but it's not that big a deal.  It sounds like you're
more familiar with the SPARK approach than the Coq approach, but I think
it's good to have some knowledge of both.

Note that dependent types can be arbitrarily complicated, e.g. a Coq type
can express the execution semantics of a chunk of program code.  If you
can take a chunk of C code, assign semantics to it and label them with a
type, and transmogrify the C code until it's a chunk of assembly code
whose semantics have the same type as the C code as validated by a
typechecker, then you have a verified compiler.  This is basically how
CompCert (compcert.inria.fr) works.  The typechecker verifies that the
compiler output has preserved the meaning of the input source code.

You might like the online book "Software Foundations", which explains
writing verified code in Coq:

http://www.cis.upenn.edu/~bcpierce/sf/current/index.html

CompCert is one of Coq's success stories so Coq should be of
considerable interest to people interested in formal verification.

The book looks pretty accessible and working through it has been on my
infinite TODO list for quite a while.

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-06 21:07                   ` Randy Brukardt
  2015-05-06 22:10                     ` Paul Rubin
@ 2015-05-07  9:01                     ` Georg Bauhaus
  2015-05-07  9:12                       ` Dmitry A. Kazakov
  2015-05-07 10:06                     ` Stefan.Lucks
  2 siblings, 1 reply; 107+ messages in thread
From: Georg Bauhaus @ 2015-05-07  9:01 UTC (permalink / raw)


On 06.05.15 23:07, Randy Brukardt wrote:
> In order for proof to
> be part of the compiler, the proof language has to be part of the language.

If the proof language is understood by the compiler, this does not
require it to be compilable Ada, or efficiently computable (and
terminating in time), I think?

While a conjecture, say, can be rephrased as an Ada function,
the function may not be known to terminate.
Now what if a program's proof still relies on this conjecture?

From this point of view, proof constructs may mean more than Ada
can express, or compute. That's not always handled by a global
assertion_policy.
Right now, the proof language (i.e. GNAT-SPARK) is using language extensions
that blend with Ada syntax. Earlier proof languages (such as e.g. SPARK)
used comments, and also understood (some) Ada.

That's not unusual: there are languages that have languages embedded
in string literals or in quoted expressions, used when formatting
or when interpolating or EVAL-ing, etc.

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07  9:01                     ` Georg Bauhaus
@ 2015-05-07  9:12                       ` Dmitry A. Kazakov
  2015-05-07  9:29                         ` Georg Bauhaus
  2015-05-07 18:32                         ` Randy Brukardt
  0 siblings, 2 replies; 107+ messages in thread
From: Dmitry A. Kazakov @ 2015-05-07  9:12 UTC (permalink / raw)


On Thu, 07 May 2015 11:01:26 +0200, Georg Bauhaus wrote:

> On 06.05.15 23:07, Randy Brukardt wrote:
>> In order for proof to
>> be part of the compiler, the proof language has to be part of the language.
> 
> If the proof language is understood by the compiler, this does not
> require it to be compilable Ada, or efficiently computable (and
> terminating in time), I think?

A compiler compiles, evidently...

The fallacy of Randy's conclusion is different. A compiler need not be
a compiler for a single language. The proof language is another language.
It cannot be the same language, because that would be inconsistent. Yet it can and
must be compilable. It can even produce executable code, so long as the
proof's code and the program's code are kept separate.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07  9:12                       ` Dmitry A. Kazakov
@ 2015-05-07  9:29                         ` Georg Bauhaus
  2015-05-07  9:31                           ` Georg Bauhaus
  2015-05-07 18:32                         ` Randy Brukardt
  1 sibling, 1 reply; 107+ messages in thread
From: Georg Bauhaus @ 2015-05-07  9:29 UTC (permalink / raw)


On 07.05.15 11:12, Dmitry A. Kazakov wrote:
> The fallacy of Randy's conclusion is different. A compiler need not be
> a compiler for a single language.

Yes, you are right.

Maybe the word "compiler" is a misnomer in this context due to
some misleading connotations.



^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07  9:29                         ` Georg Bauhaus
@ 2015-05-07  9:31                           ` Georg Bauhaus
  0 siblings, 0 replies; 107+ messages in thread
From: Georg Bauhaus @ 2015-05-07  9:31 UTC (permalink / raw)


On 07.05.15 11:29, Georg Bauhaus wrote:
> Maybe the word "compiler" is a misnomer in this context due to
> some misleading connotations.

OTOH, RR Software's compiler is perfectly named for this purpose!

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-06 21:07                   ` Randy Brukardt
  2015-05-06 22:10                     ` Paul Rubin
  2015-05-07  9:01                     ` Georg Bauhaus
@ 2015-05-07 10:06                     ` Stefan.Lucks
  2015-05-07 12:16                       ` Dmitry A. Kazakov
  2015-05-07 18:52                       ` Randy Brukardt
  2 siblings, 2 replies; 107+ messages in thread
From: Stefan.Lucks @ 2015-05-07 10:06 UTC (permalink / raw)



On Wed, 6 May 2015, Randy Brukardt wrote:

> If that's all Ada is going to be used for, it's completely irrelevant what
> features it has. I would never have used Ada personally if that was the
> case. Ada (IMHO) is a language to write (more) correct programs, no matter
> what kind of programs you write.

Good point!

> The basic idea behind the preconditions in Ada 2012 is to give a way for
> people (and implementations and tools as well) to ease into using additional
> checking and proof.

Actually, the first thing is to ease writing and maintaining proper 
specifications, disregarding either run-time checks or proofs. One could 
see them as enhanced comments. Quite often I have seen something like

   function Divide(Dividend, Divisor: Numeric_Type) return Numeric_Type;
   -- requires Y /= 0.0

This is a simplistic example, where one can easily guess that someone has 
changed the name of the parameter "Y" into "Divisor". But in reality, this 
is not always so obvious.
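
Written as an Ada 2012 precondition instead of a comment, the same requirement
cannot silently drift away from the parameter names:

   function Divide(Dividend, Divisor: Numeric_Type) return Numeric_Type
     with Pre => Divisor /= 0.0;
   -- Renaming the parameter now forces the specification to be updated,
   -- or the compiler rejects it.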

> Almost no one is willing to submit to the horrors of complete 
> description of entities as required by SPARK.

Actually, you don't need *complete* descriptions for SPARK. Often, 
verifying incomplete descriptions can be useful. Of course, the static 
verification will only verify the properties you describe.

> But (almost?) everyone using Ada would be interested in improving the 
> correctness of their programs, one assertion at a time.

That is the point.

> By extending those mechanisms (via predicates and preconditions) to
> arbitrary expressions, we allow much more such error detection to occur.

Agreed!

> I don't believe that checks in Ada (of any kind) should ever be turned off.

Here, I heavily disagree. Often, checking relevant properties is much too 
expensive to perform in production code.

A simple example is binary search over a sorted array. The precondition 
requires the array to be sorted. If the compiler succeeds in optimising 
the test away, it is equivalent to a static program verifier proving the 
precondition holds when the binary search is called. If the compiler fails 
to optimise the check away, the execution time goes up from logarithmic to 
linear. If you can live with that, you don't need binary search!
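
A sketch of why that check is linear (the array and element types are assumed,
not part of the original example):

   function Binary_Search (A : Element_Array; Key : Element) return Natural
     with Pre => (for all I in A'First .. A'Last - 1 =>
                    A (I) <= A (I + 1));
   -- Evaluating this precondition visits every element: O(n) work wrapped
   -- around an O(log n) search, unless a compiler or prover discharges it.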

Actually, one thing I am missing from Ada 2012 is a convenient and 
fine-grained way to tell the compiler which pre- and postconditions and 
invariants are to be checked, and which checks are to be skipped.

Most urgently, I would expect an option to skip checking ordinary pre- and 
postconditions, without skipping those that explicitly raise some 
exceptions. The point is, these two forms of precondition are semantically 
totally different:

The semantic of a plain precondition is essentially:

   * If I am True, the postcondition will hold.
     Never call the subprogram if I am False!!!
     Otherwise, the subprogram might do anything.

This is a precondition in the "Design by Contract" sense.

A typical example for this would be "the array is sorted" when calling 
binary search. If you call a binary search routine with an unsorted array,
you will likely get false results. And you deserve the blame!

The semantic of a precondition with something like "else raise 
This_Exception" is

   * If I am True, the postcondition will hold
     when the subprogram terminates.
     If I am False, the subprogram will not do anything,
     except for raising This_Exception.

A typical example for this kind of precondition would be "the file exists" 
when trying to open a file. (BTW, I would prefer a phrase different from 
"precondition" for this kind of thing, but that appears to be too late for 
Ada, now.)
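
In Ada 2012 syntax the two kinds read almost alike, which is part of the
problem (a sketch; Is_Sorted, Element_Array and Open_Config are made up, while
Exists and Name_Error are meant as the Ada.Directories and Ada.IO_Exceptions
ones):

   -- Design-by-Contract sense: the caller must never violate this,
   -- and the check may be disabled.
   function Binary_Search (A : Element_Array; Key : Element) return Natural
     with Pre => Is_Sorted (A);

   -- Contracted behaviour: the exception is part of the specification
   -- and must always be raised for a missing file.
   procedure Open_Config (Name : in String)
     with Pre => Exists (Name) or else raise Name_Error;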

Maybe, Ada 202X could include something like

   with Pre => ... -- plain precondition, can be turned off
        Pre'Check => ... -- must be checked at run time

> I also don't believe in separate proof tools. That's something that should
> be a basic part of the compiler (it has to be to do optimization, check
> elimination, and the like anyway). The difficult question is how to feed
> information about those things (particular checks known to fail) back to the
> programmer (as optimization phases tend to run without messages, and the
> messages that they do give are rather non-specific). In order for proof to
> be part of the compiler, the proof language has to be part of the language.

Did you try out SPARK 2014? The proof language is part of the language, 
and with proper use of GNAT project files, calling the prover becomes 
as much part of the compilation process as calling the syntax checker.

Stefan


------  I  love  the  taste  of  Cryptanalysis  in  the morning!  ------
uni-weimar.de/de/medien/professuren/mediensicherheit/people/stefan-lucks
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07 10:06                     ` Stefan.Lucks
@ 2015-05-07 12:16                       ` Dmitry A. Kazakov
  2015-05-07 18:00                         ` Stefan.Lucks
  2015-05-07 18:52                       ` Randy Brukardt
  1 sibling, 1 reply; 107+ messages in thread
From: Dmitry A. Kazakov @ 2015-05-07 12:16 UTC (permalink / raw)


On Thu, 7 May 2015 12:06:29 +0200, Stefan.Lucks@uni-weimar.de wrote:

> On Wed, 6 May 2015, Randy Brukardt wrote:
> 
>> I don't believe that checks in Ada (of any kind) should ever be turned off.
> 
> Here, I heavily disagree. Often, checking relevant properties is much to 
> expensive to perform the checks them in production code.

Merely a confusion of checks with code. There is no such thing as a runtime
correctness check; there is only run-time code. You can neither remove
functional code nor spoil your program with non-functional code, especially
code that would break the behavior.

> A simple example is binary search over a sorted array. The precondition 
> requires the array to be sorted.

And what does it mean for the behavior? If unsorted input is *valid* and
*must* raise an exception (contracted behavior), then you cannot remove the code
*implementing* this behavior.

Consider this client program:

declare
   X : Element;
begin
   X := Search (Data, Key);
   begin
       null;
   exception
       when Not_Sorted_Error =>
          X := Search (Sort (Data), Key);
   end;
end;

Is this code correct?

> Actually, one thing I am missing from Ada 2012 is a convenient and 
> fine-grained way to tell the compiler which pre- and postconditions and 
> invariants are to be checked, and which checks are to be skipped.

There shall be no dynamic predicates at all. They are inherently
inconsistent. No program shall evaluate its own correctness. No
executable code shall be put into declarations.

> Most urgently, I would expect an option to skip checking ordinary pre- and 
> postconditions, without skipping those that explicitly raise some 
> exceptions. The point is, these two forms of precondition are semantically 
> totally different:
> 
> The semantic of a plain precondition is essentially:
> 
>    * If I am True, the postcondition will hold.
>      Never call the subprogram if I am False!!!
>      Otherwise, the subprogram might do anything.
> 
> This is a precondition in the "Design by Contract" sense.

This is just a precondition in any sense. It reads: the program P is
correct only if pre(P). Nothing else.
 
> The semantic of a precondition with something like "else raise 
> This_Exception" is
> 
>    * If I am True, the postcondition will hold
>      when the subprogram terminates.
>      If I am False, the subprogram will not do anything,
>      except for raising This_Exception.

This is not a precondition; this is contracted behavior that mandates a
certain kind of output (an exception) for a certain kind of input (an unsorted
array).
 
-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07 12:16                       ` Dmitry A. Kazakov
@ 2015-05-07 18:00                         ` Stefan.Lucks
  2015-05-07 19:01                           ` Randy Brukardt
  0 siblings, 1 reply; 107+ messages in thread
From: Stefan.Lucks @ 2015-05-07 18:00 UTC (permalink / raw)



On Thu, 7 May 2015, Dmitry A. Kazakov wrote:

>> Here, I heavily disagree. Often, checking relevant properties is much too
>> expensive to perform in production code.
>
> Merely a confusion of checks with code. There is no such thing as runtime
> correctness check,

Agreed!

At best, a run time check is some sort of a testing harness. It is useless 
to show the correctness of a program, but it can help to discover bugs.

>> A simple example is binary search over a sorted array. The precondition
>> requires the array to be sorted.
>
> And what does it mean for the behavior? If unsorted input is *valid* and
> *must* raise an exception (contracted behavior), then you cannot remove the code
> *implementing* this behavior.
>
> Consider this client program:

[...] I'll slightly rewrite your client program -- I believe the 
exception handler was in the wrong place.

   declare
     X : Extended_Index_Type;
   begin
     X := Search (Data, Key);
   exception
       when Not_Sorted_Error | Assertion_Error =>
          X := Search (Sort (Data), Key);
   end;

> Is this code correct?

It depends on the specification of "Search".

Specification 1:

   function Search(Data: Array_Type; Key: Value_Type)
                      return Extended_Index_Type with
     pre  => Sorted(Data),
     post => (if Search'Result in Data'Range
              then Data(Search'Result) = Key
              else Search'Result = No_Index and then
                   (for all I in Data'Range => Data(I) /= Key));

The expression "Sorted(Data)" is a precondition. Every client which can 
possibly violate the precondition is buggy. Thus, the above code is buggy. 
(Or otherwise, the exception handler is dead code.)

One property of a proper precondition (or postcondition or ...) is that 
disabling the check does not change the functional behaviour of correct 
programs.

Specification 2:

   function Search(Data: Array_Type; Key: Value_Type)
                      return Extended_Index_Type with
     pre  => (Sorted(Data) or else raise Not_Sorted_Error),
     post => (if Search'Result in Data'Range
              then Data(Search'Result) = Key
              else Search'Result = No_Index and then
                   (for all I in Data'Range => Data(I) /= Key));

The expression following "pre" is "contracted behaviour" as you put it. 
The code above is correct, and disabling the check must be prohibited, 
because it would break correct programs. Which is why I wrote the following:

>> Most urgently, I would expect an option to skip checking ordinary pre- and
>> postconditions, without skipping those that explicitly raise some
>> exceptions.

In other words, I want to be able to switch off proper preconditions (and 
postconditions, whatever) without affecting contracted behaviour.

I deprecate the use of the word "precondition" for the expression 
following "pre" in spec 2. But I will not fight about names.

Furthermore, I am quite happy with Ada allowing one to specify contracted 
behaviour, even though I would have preferred an aspect of its own for 
contracted behaviour. The "pre" aspect would better have been reserved 
for proper preconditions. But that appears to be too late now. On the other 
hand, it would not be too late to support disabling proper preconditions 
without changing contracted behaviour.

Stefan


------  I  love  the  taste  of  Cryptanalysis  in  the morning!  ------
uni-weimar.de/de/medien/professuren/mediensicherheit/people/stefan-lucks
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07  9:12                       ` Dmitry A. Kazakov
  2015-05-07  9:29                         ` Georg Bauhaus
@ 2015-05-07 18:32                         ` Randy Brukardt
  2015-05-08  7:50                           ` Dmitry A. Kazakov
  1 sibling, 1 reply; 107+ messages in thread
From: Randy Brukardt @ 2015-05-07 18:32 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:1ftovlipftuej.p27p4vog2ka6.dlg@40tude.net...
> On Thu, 07 May 2015 11:01:26 +0200, Georg Bauhaus wrote:
>> On 06.05.15 23:07, Randy Brukardt wrote:
>>> In order for proof to
>>> be part of the compiler, the proof language has to be part of the 
>>> language.
>>
>> If the proof language is understood by the compiler, this does not
>> require it to be compilable Ada, or efficiently computable (and
>> terminating in time), I think?
>
> A compiler compiles, evidently...
>
> The fallacy of Randy's conclusion is different. A compiler need not to be
> the compiler from single language.

True enough. I was going to refute that in my original message, but I 
thought it would be rambling.

I personally am from the camp that thinks that Ada is the best possible 
language for communicating anything. (Well, other than with other humans, but 
that's the fault of the humans and not the language. ;-) [That's certainly 
true of the expression syntax, there might be some room for improvement in 
declarations.] If something can't be expressed in Ada, it's not worth 
expressing (I see no value to incomputable things).

> The proof language is another language. It cannot be same language because 
> that is inconsistent.

I completely disagree here. If one is required to write separate proof 
instructions, hardly anyone would do it. After all, developing in Ada 
already has a reputation for being slow; doubling that time is never going 
anywhere outside of narrow niches. The value of proof, IMHO, is in finding 
runtime failures at compile time, both because that will generate better 
code, and because earlier detection of errors is better. (This becomes even 
more useful in the face of exception contracts, because absence of an 
exception is then either proved or the program is illegal.) That makes proof 
most valuable when applied to runtime checks, such as Ada 2012 assertions, 
or simply explicit code of a similar form.

I don't find any value in proving the "correctness" of code (whatever that 
means), because the description of what the code does is going to be at 
least as complex as the code itself, and thus at least as susceptible to 
errors. You end up writing the program twice in two different languages. 
That might have value in extreme circumstances (loss of life potential), but 
most programs are not like that (my spam filter, for example, or our Ada 
compiler).

So one language for the majority of us doing "normal" things; perhaps two 
languages for extreme cases (but I'm not remotely interested in that - 
things like SPARK are terribly limiting).

                                     Randy.





^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07 10:06                     ` Stefan.Lucks
  2015-05-07 12:16                       ` Dmitry A. Kazakov
@ 2015-05-07 18:52                       ` Randy Brukardt
  2015-05-07 19:40                         ` Stefan.Lucks
  2015-05-07 21:29                         ` Georg Bauhaus
  1 sibling, 2 replies; 107+ messages in thread
From: Randy Brukardt @ 2015-05-07 18:52 UTC (permalink / raw)


<Stefan.Lucks@uni-weimar.de> wrote in message 
news:alpine.DEB.2.11.1505071103310.14859@debian...
...
>> I don't believe that checks in Ada (of any kind) should ever be turned 
>> off.
>
>Here, I heavily disagree. Often, checking relevant properties is much too
>expensive to perform in production code.
>
>A simple example is binary search over a sorted array. The precondition
>requires the array to be sorted. If the compiler succeeds in optimising
>the test away, it is equivalent to a static program verifier proving the
>precondition holds when the binary search is called.

Exactly! That's the entire idea; the compiler *should* be doing these 
optimizations, indeed one major branch of static program verification comes 
from enhancing compiler optimizer technology (CodePeer is an example of 
that). I think that technology should simply have stayed in the compiler.

>If the compiler fails to optimise the check away, the execution time goes 
>up
>from logarithmic to linear. If you can live with that, you don't need 
>binary search!

If the compiler fails to optimize the check away, your program is wrong in 
some sense, and you should have gotten an error or warning (depending on the 
compiler mode and exception contracts) to that effect. You ought to fix your 
program (probably by adding an appropriate predicate) so that the check 
*can* be eliminated (or pushed to somewhere where the cost is irrelevant).

>Turning off the checks just hides the problem. There is no value to that, 
>IMHO.
>The semantic of a plain precondition is essentially:
>
>   * If I am True, the postcondition will hold.
>     Never call the subprogram if I am False!!!
>     Otherwise, the subprogram might do anything.

"anything" => erroneous.

>This is a precondition in the "Design by Contract" sense.

Ada 2012 does not have these sorts of preconditions. (We've discussed it 
numerous times.) The theory is that adding a precondition NEVER makes a 
program less safe. Ergo, preconditions can be ignored, but violating an 
ignored precondition is not erroneous in the sense that violating a 
suppressed check is. There's a reason that we didn't call this "suppressing" 
preconditions! Thus, there is no circumstance where the "subprogram might do 
anything".

I've thought a bit about having some additional Assertion_Policy "suppress" 
that would allow the semantics you describe above, but we haven't talked 
about that at the language level.

> A typical example for this would be "the array is sorted" when calling
> binary search. If you call a binary search routine with an unsorted array,
> you will likely get false results. And you deserve the blame!

That's not "anything". That's a well-defined (but useless) result. Be 
careful about your terminology!

> The semantic of a precondition with something like "else raise
> This_Exception" is
>
>   * If I am True, the postcondition will hold
>     when the subprogram terminates.
>     If I am False, the subprogram will not do anything,
>     except for raising This_Exception.

This is the meaning of all Ada 2012 preconditions. "This_Exception" just 
defaults to "Assertion_Error".

Anyway, the Assertion_Policy can be changed locally, and the policy in 
effect at the point of the declaration determines what policy is used for 
calls. Plus the policy can be set separately for different kinds of 
assertions. Thus, you can get the effect you want with the existing 
policies, so long as you don't try to write two different kinds of 
assertions on the same subprogram.
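
For example (a sketch of the standard policy pragma; how faithfully a given
compiler implements the per-kind form is exactly the question below):

    pragma Assertion_Policy
      (Pre            => Ignore,   -- plain preconditions skipped
       Post           => Ignore,
       Type_Invariant => Check);   -- invariants still checked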

Note that there is some debate about the value of the fine-grained policy 
setting; it's unclear that GNAT implements it correctly. If some of their 
customers showed concern about the correct implementation of those rules, 
that certainly would change.

                                        Randy.




^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07 18:00                         ` Stefan.Lucks
@ 2015-05-07 19:01                           ` Randy Brukardt
  2015-05-07 19:29                             ` Niklas Holsti
  2015-05-07 19:55                             ` Dmitry A. Kazakov
  0 siblings, 2 replies; 107+ messages in thread
From: Randy Brukardt @ 2015-05-07 19:01 UTC (permalink / raw)


<Stefan.Lucks@uni-weimar.de> wrote in message 
news:alpine.DEB.2.11.1505071909280.16448@debian...
...
> One property of a proper precondition (or postcondition or ...) is that
> disabling the check does not change the functional behaviour of correct
> programs.

Sure, but this is irrelevant. There are no "correct" programs (in all 
possible senses). How do you know that you have a correct program? If you 
use some prover program, it too may have bugs. Or there might be bad data 
(cosmic rays?) Or the specification might be incomplete. Consider the latest 
Dreamliner issue; that probably wouldn't have been caught by a prover simply 
because no one would have included an appropriate assertion.

Ergo, I don't believe that "proper preconditions" really exist. And in the 
rare cases that they do (perhaps because of an immediately preceding 
postcondition), a compiler would have eliminated them anyway, so you're not 
paying anything for the supposed runtime check. (After all, Ada compilers 
have been aggressively removing checks since 1983; that's nothing new to an 
Ada compiler writer.)

                                            Randy.



^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07 19:01                           ` Randy Brukardt
@ 2015-05-07 19:29                             ` Niklas Holsti
  2015-05-08 23:16                               ` Randy Brukardt
  2015-05-07 19:55                             ` Dmitry A. Kazakov
  1 sibling, 1 reply; 107+ messages in thread
From: Niklas Holsti @ 2015-05-07 19:29 UTC (permalink / raw)


On 15-05-07 22:01 , Randy Brukardt wrote:
> <Stefan.Lucks@uni-weimar.de> wrote in message
> news:alpine.DEB.2.11.1505071909280.16448@debian...
> ...
>> One property of a proper precondition (or postcondition or ...) is that
>> disabling the check does not change the functional behaviour of correct
>> programs.
>
> Sure, but this is irrelevant. There are no "correct" programs (in all
> possible senses). How do you know that you have a correct program? If you
> use some prover program, it too may have bugs. Or there might be bad data
> (cosmic rays?)

Those are practical problems, not problems of principle. If you took 
the same attitude to mathematics, you would claim that there are no 
correct theorems. So I disagree with you.

> Or the specification might be incomplete. Consider the latest
> Dreamliner issue; that probably wouldn't have been caught by a prover simply
> because no one would have included an appropriate assertion.

If it was an overflow problem (and not wrap-around of a modular type) 
CodePeer would probably have complained that it could not prove absence 
of overflow. That is, many failures result from violations of general 
assertions that one does not have to write explicitly.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07 18:52                       ` Randy Brukardt
@ 2015-05-07 19:40                         ` Stefan.Lucks
  2015-05-08  7:28                           ` Dmitry A. Kazakov
  2015-05-08 22:52                           ` Randy Brukardt
  2015-05-07 21:29                         ` Georg Bauhaus
  1 sibling, 2 replies; 107+ messages in thread
From: Stefan.Lucks @ 2015-05-07 19:40 UTC (permalink / raw)



On Thu, 7 May 2015, Randy Brukardt wrote:

> If the compiler fails to optimize the check away, your program is wrong in
> some sense, and you should have gotten an error or warning (depending on the
> compiler mode and exception contracts) to that effect.

I am a big fan of correctness proofs, where they are applicable. But 
logically,

   Not(Proven_Correct) /= Proven(Incorrect)

Furthermore, automatic theorem proving can only go so far. I may actually 
know my program to be correct -- and maybe I can even prove it manually. 
Why should the compiler reject my program, or insert useless checks, just 
because it fails to find the proof?

Warning or not, I would consider a compiler (or a language) which generates 
linear-time code for binary search badly broken. Rejecting the program 
would be the lesser evil. Which would turn Ada into a new SPARK.

But then, the Ada standard would have to define the underlying theorem 
prover, for compatibility reasons. Otherwise, the same program might be proven 
correct by one prover while another prover fails.

> You ought to fix your
> program (probably by adding an appropriate predicate) so that the check
> *can* be eliminated (or pushed to somewhere where the cost is irrelevant).

Why do I need to fix the program, if I know it is correct? Just because 
the compiler isn't good enough at theorem proving?

>> Turning off the checks just hides the problem.

*IF* there is a problem at all.

Stefan


------  I  love  the  taste  of  Cryptanalysis  in  the morning!  ------
uni-weimar.de/de/medien/professuren/mediensicherheit/people/stefan-lucks
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07 19:01                           ` Randy Brukardt
  2015-05-07 19:29                             ` Niklas Holsti
@ 2015-05-07 19:55                             ` Dmitry A. Kazakov
  2015-05-08 23:24                               ` Randy Brukardt
  1 sibling, 1 reply; 107+ messages in thread
From: Dmitry A. Kazakov @ 2015-05-07 19:55 UTC (permalink / raw)


On Thu, 7 May 2015 14:01:34 -0500, Randy Brukardt wrote:

> <Stefan.Lucks@uni-weimar.de> wrote in message 
> news:alpine.DEB.2.11.1505071909280.16448@debian...
> ...
>> One property of a proper precondition (or postcondition or ...) is that
>> disabling the check does not change the functional behaviour of correct
>> programs.
> 
> Sure, but this is irrelevant. There are no "correct" programs (in all 
> possible senses). How do you know that you have a correct program?

You don't need to know it for it to exist. (If a tree falls in a
forest and no one is around to hear it, does it make a sound?)

A correct program exists in the sense that there exists a combination of
characters that represents this program. Whether anybody is capable of
writing this combination down does not matter. Moreover, the program exists in
strictly constructive terms. You can enumerate all possible programs and
some of them will be correct. Even the correctness proof is constructive, because
the number of states of any program is finite. So you can always find a
correct program given enough time.

> If you 
> use some prover program, it too may have bugs.

Yes, but every programmer deals with a partially correct program; he knows
which parts of the program are correct, which parts are known to be
incorrect, and which parts are neither.

> Or there might be bad data (cosmic rays?)

Bad data do not make a program incorrect. (If "bad" means "inputs I forgot to
think about", that is lousy problem analysis.)

> Or the specification might be incomplete.

Oh, but that does not make its implementation incorrect. Incompleteness of the
specification is not a property of the program. (Be careful what you wish for,
it might come true.)

> Consider the latest 
> Dreamliner issue; that probably wouldn't have been caught by a prover simply 
> because no one would have included an appropriate assertion.

It is an infinite recursion, which is why it is meaningless to define
correctness, as you seem to do, as something absolute. There cannot be such a thing.
The program is correct when it agrees with the specified behavior. Period.
Whether the specification is itself correct, or whether the behavior is desired,
legal, moral, safe, haram, etc., is outside the scope.

> Ergo, I don't believe that "proper preconditions" really exist.

In the sense you try to put into it, yes, they do not exist. They do not
because you could not formalize it. There is no formal framework where you
could consistently formulate that idea. It is inherently inconsistent.

> And in the 
> rare cases that they do (perhaps because of an immediately preceding 
> postcondition), a compiler would have eliminated them anyway, so you're not 
> paying anything for the supposed runtime check. (After all, Ada compilers 
> have been aggressively removing checks since 1983; that's nothing new to an 
> Ada compiler writer.)

The point is that these are fundamentally different things. Correctness is
outside. Behavior is inside. You cannot convert one into the other while keeping
the program. Check elimination is strictly about behavior, merely a
program transformation, an optimization. Correctness evaluation is not
behavior; it is not the program at all.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07 18:52                       ` Randy Brukardt
  2015-05-07 19:40                         ` Stefan.Lucks
@ 2015-05-07 21:29                         ` Georg Bauhaus
  2015-05-08 23:11                           ` Randy Brukardt
  1 sibling, 1 reply; 107+ messages in thread
From: Georg Bauhaus @ 2015-05-07 21:29 UTC (permalink / raw)


On 07.05.15 20:52, Randy Brukardt wrote:
> If the compiler fails to optimize the check away, your program is wrong in
> some sense, and you should have gotten an error or warning (depending on the
> compiler mode and exception contracts) to that effect.

With respect, this attitude towards manipulating a program's
meaning during translation makes a compiler assume responsibility
where it really is the programmers' responsibility (as per
contract), as expressed in clauses like Pre and Post.

Why then is the *program* somehow buggy, as you say,
because some compiler's optimizers can't follow the math that
has been done already, and expressed as a truth in Pre?

As Stefan Lucks explains in his reply, why would a compiler
override what the programmer has stated as a proven truth?
That's not Ada.  That's more like a compiler getting in the way.

I think that

   Pre'__unchecked__ =>

might be in order, then, with the understanding that it's not
real, but conveying the idea.

Thus, if the compiler puts checks where the programmer has
shown they are superfluous, that's not Ada. At least it used
not to be like that.

  "Design your program by obeying our optimizer, be defensive,
   don't bother with logic and proofs! We'll take care.
  "Doing so prevents bugs (if possible)."

That's not Ada. That's another sales strategy.

Are we supposed to forget such contracts entirely because this kind
of formally proven programming is, as you say, not meant by Ada's
new "contracts"?

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07 19:40                         ` Stefan.Lucks
@ 2015-05-08  7:28                           ` Dmitry A. Kazakov
  2015-05-08 22:58                             ` Randy Brukardt
  2015-05-08 22:52                           ` Randy Brukardt
  1 sibling, 1 reply; 107+ messages in thread
From: Dmitry A. Kazakov @ 2015-05-08  7:28 UTC (permalink / raw)


On Thu, 7 May 2015 21:40:27 +0200, Stefan.Lucks@uni-weimar.de wrote:

> But then, the Ada standard would have to define the underlying theorem 
> prover, for compatibility reasons. Else, the same program may be proven 
> correct by one prover, where another prover fails.

Yes, this is a huge problem in my eyes. If you make program legality
dependent on the prover, you will have false negatives of the worst kind: provably
correct programs rejected because the standard falsely assumed they cannot
be proven.

>> You ought to fix your
>> program (probably by adding an appropriate predicate) so that the check
>> *can* be eliminated (or pushed to somewhere where the cost is irrelevant).
> 
> Why do I need to fix the program, if I know it is correct? Just because 
> the compiler isn't good enough at theorem proving?

Right, that is it. Consider exceptions as an example. Let P promise not to
raise E. If you cannot prove that, P is illegal.

I think a possible way out is a construct (a pragma) that would allow the
programmer to give his word for certain parts of the proof, e.g. some kind
of weak axioms. The prover would first try to prove without them and, if that
fails, with them (giving a warning). Only if both fail is the program
incorrect.
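
Something close to this already exists: GNAT (and SPARK 2014) provide pragma
Assume, which a prover takes on trust while the assertion policy decides
whether it is also checked at run time. A sketch (Process is a made-up
procedure):

   procedure Step (N : in Natural) is
   begin
      pragma Assume (N < 10_000);  -- the programmer vouches for this bound
      Process (N * N);             -- the overflow check can now be discharged
   end Step;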

>>> Turning off the checks just hides the problem.
> 
> *IF* there is a problem at all.

Sometimes checks fire without any problem, e.g. accessibility checks. In
many cases they fail in correct programs. What they actually catch is not
the program's behavior being wrong but its design. It is far
too late for run time to complain about the design! The same goes for
dynamic predicates. They try to deal with something that fundamentally
cannot be dealt with at run time.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07 18:32                         ` Randy Brukardt
@ 2015-05-08  7:50                           ` Dmitry A. Kazakov
  2015-05-08 23:31                             ` Randy Brukardt
  0 siblings, 1 reply; 107+ messages in thread
From: Dmitry A. Kazakov @ 2015-05-08  7:50 UTC (permalink / raw)


On Thu, 7 May 2015 13:32:28 -0500, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:1ftovlipftuej.p27p4vog2ka6.dlg@40tude.net...

>> The proof language is another language. It cannot be same language because 
>> that is inconsistent.
> 
> I completely disagree here. If one is required to write separate proof 
> instructions, hardly anyone would do it.

Still it cannot be the same language. But many, including me, would gladly
add proof instructions to critical parts of the program. And this should
not preclude sharing the same source files for both languages.

> After all, developing in Ada 
> already has a reputation for being slow, doubling that time is never going 
> anywhere outside of narrow niches.

It would meet much more acceptance if adding proofs were not obligatory.
I see that as a great advantage too. "Write first, prove later" might not
sound much like the Ada attitude, but I am sure most programmers would
gladly embrace this approach. Most programmers do code cleanup, review,
and beautification anyway. Adding proofs naturally fits into this phase.

> The value of proof, IMHO, is in finding 
> runtime failures at compile time, both because that will generate better 
> code, and because earlier detection of errors is better. (This becomes even 
> more useful in the face of exception contracts, because absence of an 
> exception is then either proved or the program is illegal.)

Yes.

> I don't find any value in proving the "correctness" of code (whatever that 
> means), because the description of what the code does is going to be at 
> least as complex as the code itself, and thus at least as susceptible to 
> errors.

It adds another layer of safety. It also gives another perspective on the
program's overall design. Proofs consider the program as a whole, with all
of its possible states, while run-time checks and tests deal with only a
very small subset of states.

> You end up writing the program twice in two different languages.

That is the idea. But it is not *the* program. The program is written only
once, in Ada. The proofs are written *for* this program; they do not change
the program itself. Though proofs may prompt the programmer to redesign
certain parts of the program.

> So one language for the majority of us doing "normal" things; perhaps two 
> languages for extreme cases (but I'm not remotely interested in that - 
> thinks like SPARK are terribly limiting).

That is because of an all-or-nothing attitude. Proofs must be truly optional.
They should not be required in the upfront design.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07 19:40                         ` Stefan.Lucks
  2015-05-08  7:28                           ` Dmitry A. Kazakov
@ 2015-05-08 22:52                           ` Randy Brukardt
  2015-05-09  0:14                             ` Paul Rubin
  2015-05-11 10:35                             ` Stefan.Lucks
  1 sibling, 2 replies; 107+ messages in thread
From: Randy Brukardt @ 2015-05-08 22:52 UTC (permalink / raw)


<Stefan.Lucks@uni-weimar.de> wrote in message 
news:alpine.DEB.2.11.1505072052570.16448@debian...
> On Thu, 7 May 2015, Randy Brukardt wrote:

...
>> You ought to fix your
>> program (probably by adding an appropriate predicate) so that the check
>> *can* be eliminated (or pushed to somewhere where the cost is 
>> irrelevant).
>
>Why do I need to fix the program, if I know it is correct? Just because
>the compiler isn't good enough at theorem proving?

Because there is no such thing. There is no real way for you to know that it 
is correct. I have plenty of examples of supposedly correct programs that 
turned out to have serious bugs.

The only way for a program to be "correct" is for it to be proven that way 
by the compiler. (And even then, the compiler algorithm might have 
problems.) Because if any other tool does it, the compiler might disagree 
(because either tool has bugs), and thus the actual code might still in fact 
be incorrect.

>>> Turning off the checks just hides the problem.
>
>*IF* there is a problem at all.

See above. There is *always* a problem; the only question is whether you are 
willing to defend against it or not.

For example, in this "Is_Sorted" example, if you are assuming that some 
object is sorted, then it should have a predicate to that effect. In such a 
case, the compiler would be able to eliminate all of the precondition checks 
on calls, the only time the predicate would be checked is when the object is 
initially created (which certainly should not be on any critical path), or 
if it is modified by a routine that doesn't include Is_Sorted in its 
postcondition (which is clearly the source of potential bugs as well).
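
A minimal sketch of that pattern (the array type, Is_Sorted, and the search
routine are all invented names for illustration, not anything standard):

   type Int_Array is array (Positive range <>) of Integer;

   --  the property, stated once as an executable quantified expression
   function Is_Sorted (A : Int_Array) return Boolean is
     (for all I in A'First .. A'Last - 1 => A (I) <= A (I + 1));

   --  objects of this subtype carry the property with them
   subtype Sorted_Int_Array is Int_Array
     with Dynamic_Predicate => Is_Sorted (Sorted_Int_Array);

   --  the precondition is then already known to hold (and is removable)
   --  whenever the actual parameter is a Sorted_Int_Array
   function Search (A : Sorted_Int_Array; Key : Integer) return Natural
     with Pre => Is_Sorted (A);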

In the absence of those sorts of things, you are just hiding potential 
flaws.

                            Randy.




^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-08  7:28                           ` Dmitry A. Kazakov
@ 2015-05-08 22:58                             ` Randy Brukardt
  0 siblings, 0 replies; 107+ messages in thread
From: Randy Brukardt @ 2015-05-08 22:58 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:mnn1t6yjuegl$.2tsyoc7gm7rk.dlg@40tude.net...
> On Thu, 7 May 2015 21:40:27 +0200, Stefan.Lucks@uni-weimar.de wrote:
>
>> But then, the Ada standard would have to define the underlying theorem
>> prover, for compatibility reasons. Else, the same program may be proven
>> correct by one prover, where another prover fails.

It's certainly true that there is a potential portability problem here, but 
I think we have no choice but to allow it. Otherwise, we have to force these 
sorts of things into warnings, which means that they have no real force and,
worse, that the Standard cannot talk about them. In such a case, programming 
will never get better.

> Yes this is a huge problem in my eyes. If you make program legality
> dependent on the prover you will have false negatives of worst kind: 
> proven
> correct programs rejected because the standard falsely thought this cannot
> be proven.

No, the problem is in portability. The standard cannot get involved in what
can and cannot be proven (other, perhaps, than setting some minimum
requirements). Beyond that, though, it has to be implementation-defined. So
the question boils down to: do we allow one compiler to reject a program
because it can't be proved that it does not raise an exception (for example),
while another compiler allows it because it can prove that? I think we HAVE
to allow that sort of non-portability; for one thing, it gives vendors a
serious incentive to improve their compilers. On top of which, it is idiocy
to require a compiler to reject something that it can tell is not a problem.
(Particularly for something optional like exception contracts.)

I expect this to be the defining question of Ada 202x; we can't have 
exception contracts without deciding this question somehow, and I don't see 
much possible advancement for Ada without those contracts.

                            Randy.


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07 21:29                         ` Georg Bauhaus
@ 2015-05-08 23:11                           ` Randy Brukardt
  2015-05-08 23:19                             ` tmoran
  0 siblings, 1 reply; 107+ messages in thread
From: Randy Brukardt @ 2015-05-08 23:11 UTC (permalink / raw)


"Georg Bauhaus" <bauhaus@futureapps.invalid> wrote in message 
news:miglcs$6gp$1@dont-email.me...
> On 07.05.15 20:52, Randy Brukardt wrote:
...
> Why then is the *program* somehow buggy, as you say,
> because some compiler's optimizers can't follow the math that
> has been done already, and expressed as a truth in Pre?

There is no "truth" in Pre; it's just part of the description of the meaning 
of the program. It's madness to assume anything more; you WILL be burned.

And there is little value (IMHO) to separate proofs. At best, you're proving 
what the separate tool *thinks* the semantics should be. Whereas the 
compiler actually *knows* what the semantics are.

Separate tools like SPARK have value today because compilers (and most 
languages!) aren't smart enough to be able to apply proving technology to 
the generation of programs. (Mostly because of performance concerns.) But 
that should change over time, and there shouldn't be any reason to keep them 
separate.

It's possible of course that I've reached my expiration date in terms of 
where Ada (and programming languages in general) need to go. Especially as 
most code is mashed-up today and as such is barely functional -- correctness 
is irrelevant when it barely meets any need. But then I (and most of us, in 
fact) have no future (and I worry about the future of humanity as a whole).

                         Randy.




^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07 19:29                             ` Niklas Holsti
@ 2015-05-08 23:16                               ` Randy Brukardt
  2015-05-09  5:18                                 ` Niklas Holsti
  0 siblings, 1 reply; 107+ messages in thread
From: Randy Brukardt @ 2015-05-08 23:16 UTC (permalink / raw)


"Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
news:cr1snsF60hoU1@mid.individual.net...
> On 15-05-07 22:01 , Randy Brukardt wrote:
>> <Stefan.Lucks@uni-weimar.de> wrote in message
>> news:alpine.DEB.2.11.1505071909280.16448@debian...
>> ...
>>> One property of a proper precondition (or postcondition or ...) is that
>>> disabling the check does not change the functional behaviour of correct
>>> programs.
>>
>> Sure, but this is irrelevant. There are no "correct" programs (in all
>> possible senses). How do you know that you have a correct program? If you
>> use some prover program, it too may have bugs. Or there might be bad data
>> (cosmic rays?)
>
> Those are practical problems, not problems of principle. If you would take 
> the same attitude to mathematics, you would claim that there are no 
> correct theorems. So I disagree with you.

All programming is practical. We do not care about theorems, only that the 
current program is correct in the current environment. Everything has to be 
reproved when anything changes (another good reason for putting it into the 
compiler, as skipping the step isn't possible, and thus problems like the 
Ariane 5 don't happen).

It seems that most of you here are infected with the "theory" disease. I 
want to make practical programming better, and I don't give a damn about any 
stupid theories. Maybe I'm just getting crazy in my late middle age. :-)

                                                  Randy.




^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-08 23:11                           ` Randy Brukardt
@ 2015-05-08 23:19                             ` tmoran
  0 siblings, 0 replies; 107+ messages in thread
From: tmoran @ 2015-05-08 23:19 UTC (permalink / raw)



> most code is mashed-up today and as such is barely functional -- correctness
> is irrelevant when it barely meets any need. But then I (and most of us, in
> fact) have no future (and I worry about the future of humanity as a whole).
  Lousy programming will delay the AI takeover. ;)

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-07 19:55                             ` Dmitry A. Kazakov
@ 2015-05-08 23:24                               ` Randy Brukardt
  2015-05-09  5:47                                 ` Dmitry A. Kazakov
  0 siblings, 1 reply; 107+ messages in thread
From: Randy Brukardt @ 2015-05-08 23:24 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:12gipcoheoucz.15ckvrgi11h3a.dlg@40tude.net...
> On Thu, 7 May 2015 14:01:34 -0500, Randy Brukardt wrote:
...
> It is an infinite recursion, which is why it is meaningless to define
> correctness, as you seem to do, as some absolute. There cannot be such a thing.

I totally agree; it doesn't make any sense to talk about a "correct 
program", or for that matter, a correct part of a program.

It can only be correct in a particular environment with particular code 
generated. The only tool that could possibly know enough of this is the 
compiler -- no other tool could come close.

...
> The point is that these are fundamentally different things. Correctness is
> outside. Behavior is inside.

The only thing that matters is behavior. I care about correctness of the 
behavior (which for Ada, mainly boils down to how and when exceptions are 
possible).

There is no such thing as a correct program (or part of a program) outside 
of its behavior. It's madness to even talk about it. Programming languages 
aren't defined well enough for the concept to be remotely meaningful.

                             Randy.


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-08  7:50                           ` Dmitry A. Kazakov
@ 2015-05-08 23:31                             ` Randy Brukardt
  2015-05-09  6:16                               ` Dmitry A. Kazakov
  0 siblings, 1 reply; 107+ messages in thread
From: Randy Brukardt @ 2015-05-08 23:31 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:8u3gz9fsuax7$.se3784kmjgvm$.dlg@40tude.net...
> On Thu, 7 May 2015 13:32:28 -0500, Randy Brukardt wrote:
...
>> You end up writing the program twice in two different languages.
>
> That is the idea. But it is not *the* program. The program is written only
> once in Ada. The proofs are written *for* this program, they do not change
> the program itself. Though proofs may hint the programmer to redesign
> certain parts of the program.

That's madness. If the description is sufficiently complete to describe the 
program (and it has to be for the proofs to mean much), then one could have 
generated the program from that description and forget the Ada altogether 
(and most people would do exactly that). And there are few applications 
where there is any value to writing the program twice; for most, it is just 
wasted time. *Especially* if you're adding them after the fact -- what would 
the point be? The program is already debugged and fielded. The only benefit 
to any of these things is in shortening the debugging time, but once that's 
already done, further work is pointless. (We had a rule at RRS about not 
futzing with working code, because we'd been burned by that many times. It 
was necessary to prevent people like me from wasting lots of time on "cleanups" 
that ultimately didn't clean up anything. As Tucker likes to say, you can 
move the bump under the rug, but it's rare that you can get rid of it.)

                           Randy.


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-08 22:52                           ` Randy Brukardt
@ 2015-05-09  0:14                             ` Paul Rubin
  2015-05-12  0:30                               ` Randy Brukardt
  2015-05-12  0:36                               ` Randy Brukardt
  2015-05-11 10:35                             ` Stefan.Lucks
  1 sibling, 2 replies; 107+ messages in thread
From: Paul Rubin @ 2015-05-09  0:14 UTC (permalink / raw)


"Randy Brukardt" <randy@rrsoftware.com> writes:
> For example, in this "Is_Sorted" example, if you are assuming that some 
> object is sorted, then it should have a predicate to that effect. 

Randy, is that a real possibility with SPARK--to have a predicate that
says that an array is sorted and have it verified at compile time by the
compiler?  I thought SPARK mostly was for proving rather simpler
conditions.


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-08 23:16                               ` Randy Brukardt
@ 2015-05-09  5:18                                 ` Niklas Holsti
  2015-05-12  0:15                                   ` Randy Brukardt
  0 siblings, 1 reply; 107+ messages in thread
From: Niklas Holsti @ 2015-05-09  5:18 UTC (permalink / raw)


On 15-05-09 02:16 , Randy Brukardt wrote:
> "Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message
> news:cr1snsF60hoU1@mid.individual.net...
>> On 15-05-07 22:01 , Randy Brukardt wrote:
>>> <Stefan.Lucks@uni-weimar.de> wrote in message
>>> news:alpine.DEB.2.11.1505071909280.16448@debian...
>>> ...
>>>> One property of a proper precondition (or postcondition or ...) is that
>>>> disabling the check does not change the functional behaviour of correct
>>>> programs.
>>>
>>> Sure, but this is irrelevant. There are no "correct" programs (in all
>>> possible senses). How do you know that you have a correct program? If you
>>> use some prover program, it too may have bugs. Or there might be bad data
>>> (cosmic rays?)
>>
>> Those are practical problems, not problems of principle. If you would take
>> the same attitude to mathematics, you would claim that there are no
>> correct theorems. So I disagree with you.
>
> All programming is practical. We do not care about theorems, only that the
> current program is correct in the current environment.

You have lost me here: you want to make the compiler prove things, but 
you do not care about theorems, and do not want to be infected with a 
"theory" disease? Is Humpty Dumpty involved somehow, making words mean 
whatever he wants them to mean? :-)

> Everything has to be reproved when anything changes

Agreed.

> (another good reason for putting it into the compiler, as skipping the step
> isn't possible, and thus problems like the Ariene 5 don't happen).

It is true that an optimizing and run-time-check-removing compiler has 
to make the same kind of analyses and have the same kind of 
understanding of the program's semantics as a program-proving tool. But 
constructing a proof from scratch is expensive, unpredictably expensive, 
and possibly non-terminating, while compilation of source code into 
machine code should take a time that is reasonable, and reasonably 
predictable.

Hitherto, compilers have been expected to remove run-time checks that 
the compiler can prove to itself are redundant, but not to remove *all* 
redundant run-time checks. If the latter is required, compilation time 
becomes unlimited -- a practical problem, no?

To make proof a routine part of compilation, it has IMO to be reduced to 
*proof checking*. Checking a proof is fast and terminating.

To integrate proof-checking with compilation, the programming language 
has to be able to express the proof (axioms, lemmas, individual proof 
steps) intertwined with the expression of the computation that is to be 
proved. And this has to be so easy that it tempts the programmer to 
write the proof -- or at least enough of the proof to guide the compiler 
-- as a routine part of creating the program. (Echoes here of 
proof-carrying code, http://en.wikipedia.org/wiki/Proof-carrying_code.)

Ada has always had such features -- principally types, subtypes, ranges 
-- and Ada 2012 has added more -- pre/post-conds and invariants. 
However, I'm not sure if the features are yet sufficient to let us 
require, in the Ada standard, that an Ada compiler should be able to 
prove (rather, to check) exception-freeness, or termination, just to 
give two examples.
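
A tiny illustration of the kind of checkable fact those features already
express (the types and the function are invented for the example):

   subtype Index is Integer range 1 .. 100;
   type Table is array (Index) of Integer;

   --  the subtype of I already carries the fact a checker needs: no index
   --  check, and hence no Constraint_Error, is possible in this body
   function Element (T : Table; I : Index) return Integer is (T (I));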

I believe it is a good goal for evolving Ada, but of course not the only 
goal.

By the way, if exception contracts are added to Ada, termination 
contracts should be considered, too.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-08 23:24                               ` Randy Brukardt
@ 2015-05-09  5:47                                 ` Dmitry A. Kazakov
  0 siblings, 0 replies; 107+ messages in thread
From: Dmitry A. Kazakov @ 2015-05-09  5:47 UTC (permalink / raw)


On Fri, 8 May 2015 18:24:28 -0500, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:12gipcoheoucz.15ckvrgi11h3a.dlg@40tude.net...
>> On Thu, 7 May 2015 14:01:34 -0500, Randy Brukardt wrote:
> ...
>> The point is that these are fundamentally different things. Correctness is
>> outside. Behavior is inside.
> 
> The only thing that matters is behavior. I care about correctness of the 
> behavior

Everybody does. The issues are

1. what correctness is

2. how correctness can be addressed at the language level

Effectively you say that you don't bother with #2.

> (which for Ada, mainly boils down to how and when exceptions are 
> possible).

Don't you care if 2+2 gives 4 in Ada?
 
> There is no such thing as a correct program (or part of a program) outside 
> of its behavior.

So long as correctness is defined in terms of behavior, there is no problem
having correct programs.

> It's madness to even talk about it. Programming languages 
> aren't defined well enough for the concept to be remotely meaningful.

You don't need it well defined. A program can be correct even if written in
C or Basic. Proving correct /= being correct.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-08 23:31                             ` Randy Brukardt
@ 2015-05-09  6:16                               ` Dmitry A. Kazakov
  2015-05-12  0:28                                 ` Randy Brukardt
  0 siblings, 1 reply; 107+ messages in thread
From: Dmitry A. Kazakov @ 2015-05-09  6:16 UTC (permalink / raw)


On Fri, 8 May 2015 18:31:02 -0500, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:8u3gz9fsuax7$.se3784kmjgvm$.dlg@40tude.net...
>> On Thu, 7 May 2015 13:32:28 -0500, Randy Brukardt wrote:
> ...
>>> You end up writing the program twice in two different languages.
>>
>> That is the idea. But it is not *the* program. The program is written only
>> once in Ada. The proofs are written *for* this program, they do not change
>> the program itself. Though proofs may hint the programmer to redesign
>> certain parts of the program.
> 
> That's madness. If the description is sufficiently complete to describe the 
> program (and it has to be for the proofs to mean much), then one could have 
> generated the program from that description and forget the Ada altogether 
> (and most people would do exactly that).

Except that no description is ever complete. A more important point is that
you could not convert such descriptions into code.

As an example, consider a definition of sine from a textbook. It defines
sine completely enough, but gives you no idea what sin(0.76) is.

The difference is in the computational framework. The behavior descriptions
are intended for the correctness prover. They program the prover, nothing
else. The behavior definitions (the program proper) are intended for the
machine. They program the machine. (The definitions from the textbook are
for a framework which is not even computable.)

> And there are few applications 
> where there is any value to writing the program twice; for most, it is just 
> wasted time.

Nothing is written twice. Proof specifications are as much a part of the
program as test code is. Don't you write tests?

> *Especially* if you're adding them after the fact -- what would 
> the point be? The program is already debugged and fielded.

Most interesting programs cannot be debugged. It is quite strange: you
don't believe correct programs exist, but trust that "debugged" programs do!
(:-))

> The only benefit 
> to any of these things is in shortening the debugging time, but once that's 
> already done, further work is pointless. (We had a rule at RRS about not 
> futzing with working code, because we'd been burned by that many times. It 
> was necessary to prevent people like me wasting lots of time on "cleanups" 
> that ultimately didn't clean up anything. As Tucker likes to say, you can 
> move the bump under the rug, but its rare that you can get rid of it.)

Adding proofs is meant to happen much earlier than that. But, surely, I would
gladly add proofs to already deployed code. It is always worth looking out
for potential problems.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-08 22:52                           ` Randy Brukardt
  2015-05-09  0:14                             ` Paul Rubin
@ 2015-05-11 10:35                             ` Stefan.Lucks
  2015-05-11 21:49                               ` vincent.diemunsch
  2015-05-12  1:03                               ` Randy Brukardt
  1 sibling, 2 replies; 107+ messages in thread
From: Stefan.Lucks @ 2015-05-11 10:35 UTC (permalink / raw)


On Fri, 8 May 2015, Randy Brukardt wrote:

>>>> Turning off the checks just hides the problem.
>>
>> *IF* there is a problem at all.
>
> See above. There is *always* a problem; the only question is whether you are
> willing to defend against it or not.

Yes, and no. Having worked a bit with SPARK, my experience is mixed. When 
the tool fails to verify my program, the investigation sometimes actually 
reveals a subtle bug, and I am happy to have used the tool. But quite as 
often, the verifier just fails on some simple and indisputable facts, such 
as proving that ((p-1) + 1) mod p is zero. Recently, when trying to write 
a sorting procedure in SPARK, the prover has been unable to figure out 
that, given an array A and two indices i /= j in the proper range, the new 
array I get from A by swapping A(i) and A(j) (without changing any of the 
values A(k) for k not in {i,j}), is a permutation of A. I haven't yet 
solved the problem.

So the ability of any of the tools we have now (and, I would argue, in the 
foreseeable future) to prove program correctness is very limited. If a 
compiler with such limited abilities turns my logarithmic-time search 
routine into a linear-time routine, just because it couldn't prove that 
the input to the routine is always sorted, then the compiler is broken.

Proving program properties (apparently you don't like the word 
"correctness" in that context) is a *tool* for the programmer. If properly 
used, it can be an extremely useful tool, especially for medium- and 
high-assurance software. But doing foolish things if the proof fails, or 
strictly requiring all relevant properties must actually be proven would 
turn this tool from something useful into a terrible burden.

> For example, in this "Is_Sorted" example, if you are assuming that some
> object is sorted, then it should have a predicate to that effect. In such a
> case, the compiler would be able to eliminate all of the precondition checks
> on calls, the only time the predicate would be checked is when the object is
> initially created (which certainly should not be on any critical path), or
> if it is modified by a routine that doesn't include Is_Sorted in its
> postcondition (which is clearly the source of potential bugs as well).

OK, maybe this would work for Is_Sorted. (I am a bit sceptical, but never 
mind.)

Here is another example. Assume some Big-Natural-Number-Type Num. I need a 
function that tells me, if a number N of type Num is a prime or not:

    function Is_Rough_Prime (N, Lowest_Factor : Num) return Boolean;

    function Is_Prime (N : Num) return Boolean is
       (Is_Rough_Prime (N, To_Num (2)));

    function Is_Rough_Prime (N, Lowest_Factor : Num) return Boolean is
       (if Lowest_Factor = N then True
        elsif N mod Lowest_Factor = To_Num (0) then False
        else Is_Rough_Prime (N, Lowest_Factor + To_Num (1)));

If N is divisible by any of 2, 3, 4, 5, ..., N-1, then N isn't prime. 
If N is divisible by none of them, it is prime.

So the above specifies properly what I want to have. But this is also 
terribly slow. (And, depending on the compiler's ability to optimize for 
storage, it could also overrun the stack.) If N is of size of interest for 
cryptographic applications, the sun will long have gone dark before the 
program finishes. (Assuming your computer would continue to run for that 
long.) I.e., this is a valid specification, but a useless test.

In practice, people usually call the Miller_Rabin primality test:

    function MR_Is_Prime (N : Num; Repetitions : Natural := 500)
             return Boolean with
      Post => MR_Is_Prime'Result = Is_Prime (N);

As a specification, the postcondition is useful. For testing with tiny 
numbers, it might be OK. But for testing with realistically-sized N, or 
for production code, this test *must* be deactivated. The user cannot wait 
for Is_Prime to terminate.

By your logic, disabling the test would be bad. Thus, the compiler would 
eventually have to prove the fact that the Miller-Rabin test is 
mathematically correct, and always gives the proper result, right?

But proving such properties is mathematically deep, and way beyond the 
abilities of the theorem provers we actually have. (And I would not hold 
my breath waiting for this to change.)

Even worse, the Miller-Rabin test is a probabilistic test. There is some 
chance that a composite number actually passes the test. The chance is very 
small -- for the default Repetitions, the chance is below 2^{-1000}, so 
neglecting this risk is perfectly OK. But the usual tools for program 
correctness (or for program properties) are not able to deal with 
probabilistic results.

> In the absence of those sorts of things, you are just hiding potential
> flaws.

Agreed! But at some point, someone (some human, not the compiler!) has to 
decide how to proceed if a relevant property has not been formally proven. 
Neither simply rejecting the program, nor inserting a potentially huge 
ballast of additional checks, is always an option.

The language, the compiler, and other tools can support you in writing good 
software, especially medium- and high-assurance software. Ada has been a 
great tool for that purpose from its very beginning, and Ada 2012 has made 
a significant step forward in that direction. But expecting too much from 
the tools will very quickly get in your way, and eventually also destroy 
the support for the tools.

Stefan


------  I  love  the  taste  of  Cryptanalysis  in  the morning!  ------
uni-weimar.de/de/medien/professuren/mediensicherheit/people/stefan-lucks
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-11 10:35                             ` Stefan.Lucks
@ 2015-05-11 21:49                               ` vincent.diemunsch
  2015-05-11 22:49                                 ` Peter Chapin
  2015-05-12  4:42                                 ` vincent.diemunsch
  2015-05-12  1:03                               ` Randy Brukardt
  1 sibling, 2 replies; 107+ messages in thread
From: vincent.diemunsch @ 2015-05-11 21:49 UTC (permalink / raw)


Hello Stefan,

The correct way of proving is to use SYMBOLIC COMPUTATION: you assume
basic facts about proof functions and then let the solver infer that your
verification condition extracted from the code is correct.

>.................... Recently, when trying to write 
> a sorting procedure in SPARK, the prover has been unable to figure out 
> that, given an array A and two indices i /= j in the proper range, the new 
> array I get from A by swapping A(i) and A(j) (without changing any of the 
> values A(k) for k not in {i,j}), is a permutation of A. I haven't yet 
> solved the problem.

It was easy in SPARK 2005:

--# function Perm (A, B : Array_Type) return Boolean;
--# assume Perm ( T~ [ I => T~(J); J => T~(I) ],  T~);

Then you can prove that A is a permutation of B by
proving that A results from a sequence of permutations
of two elements, starting with B. That is the case in all sorting
algorithms that I know (QuickSort, HeapSort, etc.).

Maybe you can also do that using "Ghost functions" in SPARK 2014.

Regards,

Vincent

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-11 21:49                               ` vincent.diemunsch
@ 2015-05-11 22:49                                 ` Peter Chapin
  2015-05-12  4:49                                   ` vincent.diemunsch
  2015-05-12  4:42                                 ` vincent.diemunsch
  1 sibling, 1 reply; 107+ messages in thread
From: Peter Chapin @ 2015-05-11 22:49 UTC (permalink / raw)


On Mon, 11 May 2015, vincent.diemunsch@gmail.com wrote:

> It was easy in Spark 2005 :
>
> --# function Perm (A, B : Array_Type) return Boolean;
> --# assume Perm ( T~ [ I => T~(J); J => T~(I) ],  T~);
>
> Then you can prove that A is a permutation of B by proving that A 
> results from a sequence of permutations of two elements, starting with 
> B. It is the case in all sorting algorithm that I know (QuickSort, 
> HeapSort, etc.).
>
> Maybe you can also do that using "Ghost functions" in Spark 2014.

It is possible.

In the book on SPARK 2014 John McCormick and I recently wrote (available 
later this summer) we show an example of proving a sorting procedure where 
the tools prove both

a) The resulting array is sorted and
b) The resulting array is a permutation of the original array.

We use ghost functions to do this. The proofs can be processed with the 
upcoming SPARK GPL 2015 release, but I'm not sure if they will go through 
with SPARK GPL 2014.

Peter

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-09  5:18                                 ` Niklas Holsti
@ 2015-05-12  0:15                                   ` Randy Brukardt
  0 siblings, 0 replies; 107+ messages in thread
From: Randy Brukardt @ 2015-05-12  0:15 UTC (permalink / raw)


"Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
news:cr5jl8Fee0jU1@mid.individual.net...
> On 15-05-09 02:16 , Randy Brukardt wrote:
>> "Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message
>> news:cr1snsF60hoU1@mid.individual.net...
>>> On 15-05-07 22:01 , Randy Brukardt wrote:
...
>>> Those are practical problems, not problems of principle. If you would 
>>> take
>>> the same attitude to mathematics, you would claim that there are no
>>> correct theorems. So I disagree with you.
>>
>> All programming is practical. We do not care about theorems, only that 
>> the
>> current program is correct in the current environment.
>
> You have lost me here: you want to make the compiler prove things, but you 
> do not care about theorems, and do not want to be infected with a "theory" 
> disease? Is Humpty Dumpty involved somehow, making words mean whatever he 
> wants them to mean? :-)

I see no value to "theorems" or "theory" per se; the value is in the 
techniques, not in those end results.

In any case, after years of arguing with Dmitry here, I've learned to take 
on his best approaches to winning an argument. ;-)

...
>> (another good reason for putting it into the compiler, as skipping the 
>> step
>> isn't possible, and thus problems like the Ariene 5 don't happen).
>
> It is true that an optimizing and run-time-check-removing compiler has to 
> make the same kind of analyses and have the same kind of understanding of 
> the program's semantics as a program-proving tool. But constructing a 
> proof from scratch is expensive, unpredictably expensive, and possibly 
> non-terminating, while compilation of source code into machine code should 
> take a time that is reasonable, and reasonably predictable.

I don't see that either is a given. Janus/Ada would run approximately 
forever if we didn't artificially bound the optimization time, and it still 
can take a long time to produce code. If we ever built the link-time code 
generation version, that time would go up by a lot.

As with everything, one can make bad code quickly, or take longer to make 
good code.

Similarly, I don't see any reason that proper proofs should take forever, as 
it is approximately the same problem as optimization and code generation. At 
some point, you give up and decide that something is unprovable. No big 
deal.

> Hitherto, compilers have been expected to remove run-time checks that the 
> compiler can prove to itself are redundant, but not to remove *all* 
> redundant run-time checks. If the latter is required, compilation time 
> becomes unlimited -- a practical problem, no?

If you want truly good code, compilation time should be nearly unlimited. 
But I agree that there is a practical limit, but the same limit applies to a 
proof tool (if you can wait 12 hours for a proof tool, you can wait 12 hours 
for a compilation, too, especially if one can turn that mode off or down, 
just like optimizers).

...
> Ada has always had such features -- principally types, subtypes, ranges --  
> and Ada 2012 has added more -- pre/post-conds and invariants. However, I'm 
> not sure if the features are yet sufficient to let us require, in the Ada 
> standard, that an Ada compiler should be able to prove (rather, to check) 
> exception-freeness, or termination, just to give two examples.

Certainly not yet. One needs exception contracts at a minimum, as otherwise 
one cannot distinguish between exceptions raised as part of the behavior of a 
subprogram and those which represent bugs.

> I believe it is a good goal for evolving Ada, but of course not the only 
> goal.

I doubt that there is an "only goal", because lots of people have input. I 
happen to think it is the only goal that ultimately matters, as most of the 
other ideas don't really move the needle in any significant way.

> By the way, if exception contracts are added to Ada, termination contracts 
> should be considered, too.

We already have "blocking" contracts on the drawing board. I don't quite see 
the point of termination contracts; a non-terminating subprogram is wrong 
99.9% of the time. Maybe a "non-termination" contract would make some sense?

                                        Randy.




^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-09  6:16                               ` Dmitry A. Kazakov
@ 2015-05-12  0:28                                 ` Randy Brukardt
  2015-05-12  8:04                                   ` Dmitry A. Kazakov
  0 siblings, 1 reply; 107+ messages in thread
From: Randy Brukardt @ 2015-05-12  0:28 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:146ggcmq4yw40.17j749rax0sqi$.dlg@40tude.net...
> On Fri, 8 May 2015 18:31:02 -0500, Randy Brukardt wrote:
...
>> And there are few applications
>> where there is any value to writing the program twice; for most, it is 
>> just
>> wasted time.
>
> Nothing is written twice. Proof specifications are as much program as test
> code is. Don't you write tests?

Outside of the ACATS (which is a product unto itself), not often. Sometimes 
I rewrite other people's tests to generalize them or to simplify them.

Most of my programs are tested in-situ, either with live data (as in the web 
server/mail server) or by using constructed data. (The ACATS essentially 
falls into this category, from the perspective of a compiler.)

I definitely don't write or use unit tests in the majority of cases. It's 
easier to use live data than to figure out some way of getting that data 
correctly initialized in order to do a unit test. (Consider operations on a 
compiler symbol table. In order to unit test those, you have to construct a 
symbol table for them to work on, and that symbol table has to be constructed 
exactly as the compiler will do it - otherwise you end up testing the unit 
test more than anything useful. Since the compiler already knows how to do 
that, we let the compiler do the setup and then debug in place.)

Since testing proves almost nothing about a program's quality, I prefer to 
avoid tests (and testing) as much as possible. [Probably too much. :-)] I want my 
compiler to detect all of my mistakes before I run anything. That's the 
whole reason I started to use Ada and continue to use Ada. (Plus testing and 
debugging of tests is incredibly frustrating.)

>> *Especially* if you're adding them after the fact -- what would
>> the point be? The program is already debugged and fielded.
>
> Most interesting programs cannot be debugged. It is quite strange, you
> don't believe correct program exist, but trust "debugged" programs do!
> (:-))

Naw, Ada does my debugging. When there's a problem that actually requires 
debugging, it takes forever for it to get fixed. (Sorry, Tero. ;-) My point 
was just that a fielded program already has some amount of testing and fixing 
done to it (we hope), and once that happens, leave it alone until/unless 
there is a problem.

                  Randy.



^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-09  0:14                             ` Paul Rubin
@ 2015-05-12  0:30                               ` Randy Brukardt
  2015-05-12 18:10                                 ` Paul Rubin
  2015-05-12  0:36                               ` Randy Brukardt
  1 sibling, 1 reply; 107+ messages in thread
From: Randy Brukardt @ 2015-05-12  0:30 UTC (permalink / raw)


"Paul Rubin" <no.email@nospam.invalid> wrote in message 
news:87pp6a1u9w.fsf@jester.gateway.sonic.net...
> "Randy Brukardt" <randy@rrsoftware.com> writes:
>> For example, in this "Is_Sorted" example, if you are assuming that some
>> object is sorted, then it should have a predicate to that effect.
>
> Randy, is that a real possibility with SPARK--to have a predicate that
> says that an array is sorted and have it verified at compile time by the
> compiler?  I thought SPARK mostly was for proving rather simpler
> conditions.

I have no idea; SPARK is not at all my idea of how Ada should work.

Ada 2012 surely does have predicates, and one hopes that Ada 202x has 
exception contracts, blocking contracts, global data contracts, and anything 
else needed to do useful data flow proving. (Most of that is already 
started, but who knows if any of it will get finished?)

                                      Randy.


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-09  0:14                             ` Paul Rubin
  2015-05-12  0:30                               ` Randy Brukardt
@ 2015-05-12  0:36                               ` Randy Brukardt
  1 sibling, 0 replies; 107+ messages in thread
From: Randy Brukardt @ 2015-05-12  0:36 UTC (permalink / raw)


"Paul Rubin" <no.email@nospam.invalid> wrote in message 
news:87pp6a1u9w.fsf@jester.gateway.sonic.net...
> "Randy Brukardt" <randy@rrsoftware.com> writes:
>> For example, in this "Is_Sorted" example, if you are assuming that some
>> object is sorted, then it should have a predicate to that effect.
>
> Randy, is that a real possibility with SPARK--to have a predicate that
> says that an array is sorted and have it verified at compile time by the
> compiler?  I thought SPARK mostly was for proving rather simpler
> conditions.

BTW, I've heard that the SPARK team is working on using Ada 2012 predicates 
in their proofs. So that might be a possibility soon even if it is not the 
case today.




^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-11 10:35                             ` Stefan.Lucks
  2015-05-11 21:49                               ` vincent.diemunsch
@ 2015-05-12  1:03                               ` Randy Brukardt
  2015-05-12  7:21                                 ` Georg Bauhaus
                                                   ` (3 more replies)
  1 sibling, 4 replies; 107+ messages in thread
From: Randy Brukardt @ 2015-05-12  1:03 UTC (permalink / raw)


<Stefan.Lucks@uni-weimar.de> wrote in message 
news:alpine.DEB.2.11.1505111113020.12062@debian...
On Fri, 8 May 2015, Randy Brukardt wrote:

>>>>> Turning off the checks just hides the problem.
>>>
>>> *IF* there is a problem at all.
>>
>> See above. There is *always* a problem; the only question is whether you 
>> are
>> willing to defend against it or not.
>
>Yes, and no. Having worked a bit with SPARK, my experience is mixed. When
>the tool fails to verify my program, the investigation sometimes actually
>reveal a subtle bug, and I am happy to have used the tool.

That's the value of those tools: to prove that something is not correct. 
It's just like testing in that way; you can't really prove that the program 
is correct, but you surely can prove that it is not correct.

> But quite as
> often, the verifier just fails at some simple and indisputable facts, such
> as proving that ((p-1) + 1) mod p is zero. Recently, when trying to write
> a sorting procedure in SPARK, the prover has been unable to figure out
> that, given an array A and two indices i /= j in the proper range, the new
> array I get from A by swapping A(i) and A(j) (without changing any of the
> values A(k) for k not in {i,j}), is a permutation of A. I haven't yet
> solved the problem.

These sorts of problems would come up in proving postconditions, but I don't 
see that happening for preconditions.

>So the ability of any of the tools we have now (and, I would argue, in the
>foreseeable future) to prove program correctness is very limited. If a
>compiler with such limited abilities turns my logarithmic-time search
>routine into a linear-time routine, just because it couldn't prove that
>the input to the routine is always sorted, then the compiler is broken.

No, I'd still argue your code is broken. If *you* know that some object is 
always sorted, then *you* should tell the compiler that with an appropriate 
predicate:

    subtype Sorted_Array is Some_Array
        with Dynamic_Predicate => Is_Sorted (Sorted_Array);

   My_Array : Sorted_Array := ...;

Now, whenever My_Array is assigned (as a whole) or passed as a parameter, it 
will be checked for whether it is sorted. That pushes the check to whenever 
the array is created/initialized/modified, which is not going to have any 
effect on your search routine.

On top of which, the routines that do the 
creation/initialization/modification probably ought to have postconditions 
that the array is sorted as well. In which case, the object also will have 
been previously checked, so the cost will be at the end of those routines. 
And possibly (although unlikely in the particular case), that check could be 
proved away there as well.

> Proving program properties (apparently you don't like the word
> "correctness" in that context)

Right.

> is a *tool* for the programmer. If properly
> used, it can be an extremely useful tool, especially for medium- and
> high-assurance software. But doing foolish things if the proof fails, or
> strictly requiring all relevant properties must actually be proven would
> turn this tool from something useful into a terrible burden.

No real burden, IMHO. The sorts of properties that should be involved should 
be relatively simple to express and thus prove, and not that expensive to 
check. Much like null pointer checks or variant checks in Ada. Turning these 
sorts of things off is silly.

I can see that there are some cases where the properties are too expensive to 
verify at runtime. It would be nice if there were a way to turn off those 
(AND ONLY THOSE) properties. But Ada doesn't have that sort of granularity, 
so I wouldn't bother writing them in the first place. (At least not as 
preconditions; most of my programs have extensive tracing/self-checking 
modes that can be enabled unit-by-unit; that's the place for such expensive 
things.)

>> For example, in this "Is_Sorted" example, if you are assuming that some
>> object is sorted, then it should have a predicate to that effect. In such 
>> a
>> case, the compiler would be able to eliminate all of the precondition 
>> checks
>> on calls, the only time the predicate would be checked is when the object 
>> is
>> initially created (which certainly should not be on any critical path), 
>> or
>> if it is modified by a routine that doesn't include Is_Sorted in its
>> postcondition (which is clearly the source of potential bugs as well).
>
> OK, maybe this would work for Is_Sorted. (I am a bit sceptical, but never
> mind.)
>
>Here is another example. Assume some Big-Natural-Number-Type Num. I need a
>function that tells me, if a number N of type Num is a prime or not:
>
>    function Is_Prime(N: Num) return Boolean is
>       (Is_Rough_Prime(N, To_Num(2)));
>
>    function Is_Rough_Prime(N, Lowest_Factor: Num) return Boolean is
>       (if Lowest_Factor = N then True
>        elsif N mod Lowest_Factor = To_Num(0) then False
>        else Is_Rough_Prime(N, Lowest_Factor + To_Num(1)));
>
>If N is divisible by either of 2, 3, 4, 5, ..., N-1, then N isn't prime.
>If N is divisible by neither, it is prime.
>
>So the above specifies properly what I want to have. But this is also
>terribly slow. (And, depending on the compiler's ability to optimize for
>storage, it could also overrun the stack.) If N is of size of interest for
>cryptographic applications, the sun will long have gone dark before the
>program finishes. (Assuming your computer would continue to run for that
>long.) I.e., this is a valid specification, but a useless test.

Correct. If it hurts, don't write that. :-) I don't begin to believe that 
all program properties can be proved. Indeed, there is a major problem in 
that there is no good way to specify which properties a subprogram does 
*not* affect. There is an infinite number of such properties, so specifying 
them one by one in the postcondition:

     Is_Sorted (Arr) = Is_Sorted (Arr)'Old and ...

is madness. (And even in a small system, there are a lot of properties. 
Consider just the interesting properties of a Vector container. The length, 
capacity, and tampering state all immediately come to mind, and most 
routines change none of them. How to communicate that?)
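
For concreteness, spelling out just two of those "unchanged" properties by
hand already looks like this (the contract below is purely illustrative,
written against the standard Vectors operations; it is not from the RM):

     procedure Replace_Element
       (Container : in out Vector;
        Index     : in     Index_Type;
        New_Item  : in     Element_Type)
     with Post => Length (Container) = Length (Container)'Old
              and Capacity (Container) = Capacity (Container)'Old;
     --  ... and so on, separately, for every property the routine
     --  does not change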

>In practice, people usually call the Miller_Rabin primality test:
>
>    function MR_Is_Prime(N: Num; Repetitions: Natural := 500)
>             return Boolean with
>      Post => MR_Is_Prime'Result = Is_Prime(N);
>
>As a specification, the postcondition is useful.

I disagree, actually. I think any useful postcondition has to be reasonably 
executable. Otherwise there is no difference from a comment, and you would be 
better served using a comment, as that won't throw out all of the easy checks 
along with this silly one.

> For testing with tiny
> numbers, it might be OK. But for testing with realistically-sized N, or
> for production code, this test *must* be deactivated. The user cannot wait
> for Is_Prime to terminate.
>
> By your logic, disabling the test would be bad. Thus, the compiler would
> eventually have to prove the fact that the Miller-Rabin test is
> mathematically correct, and always gives the proper result, right?

No, by my logic, *writing* a test that you can't actually execute is bad, as 
it tells no one anything useful.

Anyway, I answered this complaint earlier in this note, so I won't repeat 
myself.

>But proving such properties is mathematically deep, and way beyond the
>abilities of the theorem provers we actually have. (And I would not hold
>my breadth to wait for this to change.)

If you can't execute it, and you can't prove it, what exact good is it over 
having a comment

  -- Returns True if N is prime.

???

I can't think of any.

> Even worse, the Miller-Rabin test is a probabilistic test. There is some
> chance that a composed number actually passes the test. The chance is very
> small -- for the default Repetitions, the chance is below 2^{-1000}, so
> neglecting this risk is perfectly OK.

Well, obviously that depends on the application.

> But usual tools for program
> correctness (or for program properties) are not able to deal with
> probabilistic results.

For good reason; in some cases, they're simply not acceptable.

>> In the absence of those sorts of things, you are just hiding potential
>> flaws.
>
>Agreed! But at some point, someone (some human, not the compiler!) has to
>make the choice how to proceed, if a relevant property has not been
>formally proven. Neither simply rejecting the program, nor inserting a
>potentially huge ballast of additional test, is always an option.

I don't think anyone would ever want a system that *required* proving 
everything. The important thing IMHO is that you  can find out what wasn't 
proven so you can determine whether to fix it (via additional 
specifications) or whether to ignore it (the obvious case being that the 
unproven case is on a non-executable path).

>The language, the compiler, and other tools, may support you to write good
>software, especially medium- and high-assurance software. Ada has been a
>great tool for that purpose from its very beginning, and Ada 2012 has made
>a significant step forward in that direction. But expecting too much from
>the tools them will very quickly get into your way, and eventually also
>destroy the support for the tools.

True enough. Expecting proof to be anything more than another way to 
determine errors early is the root of the problem. It's useful to know what 
the compiler doesn't prove in order that one can decide to ignore it, but 
clearly ignoring it is the default (just as it is for regular Ada runtime 
checks). No one should ever be forced to change any code unless a proof 
determines that a check *will* fail (or there is a possibility of raising an 
uncontracted exception -- but no one would ever be required to use an 
exception contract).

I want to bring this tool to people that would not go out of their way to 
use it. (I.e., me. :-) That means it has to be in the compiler so that they 
get the benefits automatically, because there is no way I'm going out and 
buying (plus learning) a separate tool to do those things. I suspect that 
many (perhaps most) programmers are like me. Like everyone, I want it all, 
for free, right now. :-) Only Ada comes close, and I just want to make it 
closer.

                                   Randy.


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-11 21:49                               ` vincent.diemunsch
  2015-05-11 22:49                                 ` Peter Chapin
@ 2015-05-12  4:42                                 ` vincent.diemunsch
  2015-05-12 14:53                                   ` johnscpg
  1 sibling, 1 reply; 107+ messages in thread
From: vincent.diemunsch @ 2015-05-12  4:42 UTC (permalink / raw)


Oops, my message was incomplete:

On Monday, May 11, 2015 at 23:49:50 UTC+2, vincent....@gmail.com wrote:
> Hello Stefan,
> 
> The correct way of proving is to use SYMBOLIC COMPUTATION : you assume
> basic facts on proof function and then let the solver infer that your verification
> condition extracted from the code is correct. 
> 
> >.................... Recently, when trying to write 
> > a sorting procedure in SPARK, the prover has been unable to figure out 
> > that, given an array A and two indices i /= j in the proper range, the new 
> > array I get from A by swapping A(i) and A(j) (without changing any of the 
> > values A(k) for k not in {i,j}), is a permutation of A. I haven't yet 
> > solved the problem.
> 
> It was easy in Spark 2005 :
> 
> --# function Perm (A, B : Array_Type) return Boolean;

procedure Swap (T : in out Array_Type; I, J : Index_Type);
--# derives T from T, I, J;
--# post T(I) = T~(J) and T(J) = T~(I) and Perm (T, T~);

Then in the body of Swap:
> --# assume Perm ( T~ [ I => T~(J); J => T~(I) ],  T~);
> 
> Then you can prove that A is a permutation of B by
> proving that A results from a sequence of permutations
> of two elements, starting with B. It is the case in all sorting
> algorithm that I know (QuickSort, HeapSort, etc.).
> 
> Maybe you can also do that using "Ghost functions" in Spark 2014.
> 
> Regards,
> 
> Vincent

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-11 22:49                                 ` Peter Chapin
@ 2015-05-12  4:49                                   ` vincent.diemunsch
  2015-05-12 23:25                                     ` Peter Chapin
  0 siblings, 1 reply; 107+ messages in thread
From: vincent.diemunsch @ 2015-05-12  4:49 UTC (permalink / raw)


On Tuesday, May 12, 2015 at 00:49:40 UTC+2, Peter Chapin wrote:

> In the book on SPARK 2014 John McCormick and I recently wrote (available 
> later this summer) we show an example of proving a sorting procedure where 
> the tools prove both
> 
> a) The resulting array is sorted and
> b) The resulting array is a permutation of the original array.
> 
> We use ghost functions to do this. 

Tell us more! Do you use the same basic idea: assuming that the swap function is
an elementary permutation of arrays? Do you use an assume statement?

> The proofs can be processed with the 
> upcoming SPARK GPL 2015 release, but I'm not sure if they will go through 
> with SPARK GPL 2014.

What is new with the SPARK GPL 2015 prover ?

Regards,

Vincent

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12  1:03                               ` Randy Brukardt
@ 2015-05-12  7:21                                 ` Georg Bauhaus
  2015-05-12 22:08                                   ` Randy Brukardt
  2015-05-12  8:02                                 ` Georg Bauhaus
                                                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 107+ messages in thread
From: Georg Bauhaus @ 2015-05-12  7:21 UTC (permalink / raw)


On 12.05.15 03:03, Randy Brukardt wrote:
> Consider just the interesting properties of a Vector container. The length,
> capacity, and tampering state all immediately come to mind, and most
> routines change none of them. How to communicate that?

By your own words, wouldn't you just ask for a predicate
that expresses all of that? Something like

   Is_Mostly_Unchanged_A (Vector)

in the positions you find adequate?

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12  1:03                               ` Randy Brukardt
  2015-05-12  7:21                                 ` Georg Bauhaus
@ 2015-05-12  8:02                                 ` Georg Bauhaus
  2015-05-12 22:14                                   ` Randy Brukardt
  2015-05-12  8:37                                 ` Stefan.Lucks
  2015-05-12 14:21                                 ` Bob Duff
  3 siblings, 1 reply; 107+ messages in thread
From: Georg Bauhaus @ 2015-05-12  8:02 UTC (permalink / raw)


On 12.05.15 03:03, Randy Brukardt wrote:
> No, I'd still argue your code is broken. If *you* know that some object is
> always sorted, then *you* should tell the compiler that with an appropriate
> predicate:
>
>      subtype Sorted_Array is Some_Array
>          with Dynamic_Predicate => Is_Sorted (Sorted_Array);
>
>     My_Array : Sorted_Array := ...;

There is no formal specification of what Is_Sorted should be.
But there should be one, somewhere (other than a comment), even
when the formal specification involves quite a bit.

How is a client of Sorted_Array to know what Is_Sorted should be?

And if the contract has that information hidden in RM style commentary,
how would the cost of keeping Some_Array sorted plus having it checked
be so different from the supplier side cost of checking that assumption?

This sounds like shifting responsibility for commercial reasons, again.



^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12  0:28                                 ` Randy Brukardt
@ 2015-05-12  8:04                                   ` Dmitry A. Kazakov
  0 siblings, 0 replies; 107+ messages in thread
From: Dmitry A. Kazakov @ 2015-05-12  8:04 UTC (permalink / raw)


On Mon, 11 May 2015 19:28:32 -0500, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:146ggcmq4yw40.17j749rax0sqi$.dlg@40tude.net...
>> On Fri, 8 May 2015 18:31:02 -0500, Randy Brukardt wrote:
> ...
>>> *Especially* if you're adding them after the fact -- what would
>>> the point be? The program is already debugged and fielded.
>>
>> Most interesting programs cannot be debugged. It is quite strange, you
>> don't believe correct programs exist, but trust "debugged" programs do!
>> (:-))
> 
> Naw, Ada does my debugging. When there's a problem that actually requires 
> debugging, it takes forever for it to get fixed.

That's the problem.

> (Sorry, Tero. ;-) My point 
> was just that a fielded program already has some amount of testing and fixing 
> done to it (we hope), and once that happens, leave it alone until/unless 
> there is a problem.

Either you have a very high level of code quality or you don't care. If it
were the latter, you would simply use C; since you don't, it must be the
former. Then we return to the starting point of the discussion. No compiler
and no language, no matter how tedious, can ensure that level upon
compilation. And as you said, it is too much of a burden to require this from
programmers. Furthermore, it is simply impossible to achieve in just a single
step of the development process. Yet you cannot delay the phase of executable
code forever. Just for pragmatic reasons, for most bugs it is much simpler and
more effective to find and eliminate them by running the code. That is why
people buy into the quick-and-dirty approach. Therefore proofs must be
optional, their extent adjustable, and the time and place in the development
process up to the programmer. 

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12  1:03                               ` Randy Brukardt
  2015-05-12  7:21                                 ` Georg Bauhaus
  2015-05-12  8:02                                 ` Georg Bauhaus
@ 2015-05-12  8:37                                 ` Stefan.Lucks
  2015-05-12 11:25                                   ` Stefan.Lucks
  2015-05-12 18:39                                   ` Paul Rubin
  2015-05-12 14:21                                 ` Bob Duff
  3 siblings, 2 replies; 107+ messages in thread
From: Stefan.Lucks @ 2015-05-12  8:37 UTC (permalink / raw)



On Mon, 11 May 2015, Randy Brukardt wrote:

> <Stefan.Lucks@uni-weimar.de> wrote in message
> news:alpine.DEB.2.11.1505111113020.12062@debian...

>> So the above specifies properly what I want to have. But this is also
>> terribly slow.
[...]
>> I.e., this is a valid specification, but a useless test.
>
> Correct. If it hurts, don't write that. :-)

Firstly, I have to write that. Somehow! Otherwise, my documentation would be 
seriously incomplete.

Either I write it as a second-class precondition in the form of a comment, 
or as a first-class precondition using the "pre" attribute, or something 
similar ("unchecked_pre" or whatever). I'd prefer the second, because the 
compiler will tell me if my precondition is syntactically correct.

Yes, I claim that Ada12-style pre- and postconditions are helpful, even if 
they are neither verified statically nor verified at run time.

I have seen too many informal specifications that didn't fit. As an 
example, consider

   function Divide (Dividend, Divisor: Num) return Num;
   -- Precondition: Y /= To_Num(0).

This example is simple and harmless. One can easily guess that at some 
point in time, the parameter "Y" was renamed to "Divisor", but the 
comment was not updated. Similar cases can be a lot more complicated. 
Having a precondition parsed by the compiler is a big win, even if the 
compiler does nothing with the precondition beyond parsing it.
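
With a first-class precondition, the same declaration might read like this 
(just a sketch, reusing Num and To_Num from the example above):

   function Divide (Dividend, Divisor : Num) return Num
     with Pre => Divisor /= To_Num (0);
   --  The compiler parses the names, so a stale "Y" could not survive here.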

Secondly, the compiler or prover may need such "believe me, this is 
true" properties to verify other properties it *can* verify.

> I don't begin to believe that all program properties can be proved.

Ah! Some of your previous posts left me with the impression that you were 
proposing an "either prove everything, or don't prove anything" approach.

>> In practice, people usually call the Miller_Rabin primality test:
>>
>>    function MR_Is_Prime(N: Num; Repetitions: Natural := 500)
>>             return Boolean with
>>      post => MR_Is_Prime'Result = Is_Prime(N);
>>
>> As a specification, the postcondition is useful.
>
> I disagree, actually. I think any useful postcondition has to be reasonably
> executable. Else there is no difference from a comment,

See above.

> and you would be better served using a comment, as then you won't throw out all of 
> the easy checks with this silly one.

Firstly, see my above writing on comments versus compiler-parsed 
specifications.

Secondly, this property may actually be relevant to statically verify 
other program properties, under the assumption that the "believe me" claim 
is true.

> No, by my logic, *writing* a test that you can't actually execute is bad, as
> it tells no one anything useful.

I think this is the core point of our disagreement -- or your 
misunderstanding, as I would put it. ;-)

You can check a precondition or a postcondition or an invariant for 
testing purposes. But none of them *is* a test, and each can serve other 
purposes than just testing.

> I don't think anyone would ever want a system that *required* proving
> everything. The important thing IMHO is that you  can find out what wasn't
> proven so you can determine whether to fix it (via additional
> specifications) or whether to ignore it (the obvious case being that the
> unproven case is on a non-executable path).

We are in violent agreement here.

But this is precisely the reason why I am trying to advertise a 
fine-grained way to separate things proven at compile time or 
checked at run time (and maybe checked during testing versus also checked in 
production code) from things merely claimed. I don't care much about the syntax; 
some "unchecked_pre" or "unchecked_post" attributes would be perfectly fine.

> I want to bring this tool to people that would not go out of their way to
> use it. (I.e., me. :-) That means it has to be in the compiler so that they
> get the benefits automatically, because there is no way I'm going out and
> buying (plus learning) a separate tool to do those things. I suspect that
> many (perhaps most) programmers are like me. Like everyone, I want it all,
> for free, right now. :-) Only Ada comes close, and I just want to make it
> closer.

Violent agreement, again!

Stefan

------  I  love  the  taste  of  Cryptanalysis  in  the morning!  ------
uni-weimar.de/de/medien/professuren/mediensicherheit/people/stefan-lucks
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12  8:37                                 ` Stefan.Lucks
@ 2015-05-12 11:25                                   ` Stefan.Lucks
  2015-05-12 18:44                                     ` Paul Rubin
  2015-05-18  9:49                                     ` Jacob Sparre Andersen
  2015-05-12 18:39                                   ` Paul Rubin
  1 sibling, 2 replies; 107+ messages in thread
From: Stefan.Lucks @ 2015-05-12 11:25 UTC (permalink / raw)



On Tue, 12 May 2015, Stefan.Lucks@uni-weimar.de wrote:

>>> So the above specifies properly what I want to have. But this is also
>>> terribly slow.
> [...]
>>> I.e., this is a valid specification, but a useless test.
>> 
>> Correct. If it hurts, don't write that. :-)

> Secondly, the compiler or prover may need such "believe me, this is 
> true" properties to verify other properties it *can* verify.

Oh, I forgot "thirdly".

Thirdly, even if it would be way too expensive to check this specification 
at run time, it may be possible to verify it statically at compile time. But I 
can only find that out by writing the specification and then running my 
tool (the compiler or whatever). Again, it would be a terrible idea for 
the compiler to insert expensive checks just because the theorem prover 
failed to prove the condition.

BTW, simple but run-time expensive conditions are more likely to be proven 
by the tool (at least for SPARK) than equivalent faster algorithms. E.g., 
I'd prefer

   pre => (for all I in 2 .. N-1 => (N mod I /= 0))

over

   pre => (for all I in 2 .. SQRT(N) => (N mod I /= 0))

even though the second would be much faster if performed at run time.
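
(If one wants a name for it, the slow-but-simple condition can itself become
a ghost expression function -- a sketch, assuming Num is an integer type and
SPARK's Ghost aspect is available:

   function Is_Prime (N : Num) return Boolean is
     (N >= 2 and then (for all I in 2 .. N - 1 => N mod I /= 0))
     with Ghost;

so the same definition serves the human reader, the run-time check during
testing, and the prover.)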

Stefan


------  I  love  the  taste  of  Cryptanalysis  in  the morning!  ------
uni-weimar.de/de/medien/professuren/mediensicherheit/people/stefan-lucks
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12  1:03                               ` Randy Brukardt
                                                   ` (2 preceding siblings ...)
  2015-05-12  8:37                                 ` Stefan.Lucks
@ 2015-05-12 14:21                                 ` Bob Duff
  2015-05-12 22:37                                   ` Randy Brukardt
  3 siblings, 1 reply; 107+ messages in thread
From: Bob Duff @ 2015-05-12 14:21 UTC (permalink / raw)


"Randy Brukardt" <randy@rrsoftware.com> writes:

> I can see that there are some cases where the properties are too expensive to 
> verify at runtime. It would be nice if there was a way to turn off those 
> (AND ONLY THOSE) properties. But Ada doesn't have that sort of granularity, 

Sure it does.  If Is_Sorted is too slow for production use, you can say:

    ... with Predicate => (if Slow_Mode then Is_Sorted(...))

and set the Slow_Mode flag to True for testing.  Also set it to True
when running proof tools.

Alternatively, you can say something like:

    function Sort(X: My_Array) return My_Array with
        Post => (if X'Length <= 20 then Is_Sorted(Sort'Result));

Now calls to Sort can be O(log N) instead of O(N).  And if Sort doesn't
do anything special for arrays longer than 20, the postcondition is
likely to catch any bugs in Sort.

> Correct. If it hurts, don't write that. :-) I don't begin to believe that 
> all program properties can be proved.

Yes, that's obviously true.  Here's a property of GNAT:

    Compared to most compilers (for any language), GNAT usually gives
    better error messages.

Anybody who has used GNAT and a lot of other compilers knows that
property is true.  But nobody can prove it in a mathematical sense,
because there's no way to formalize it.

- Bob


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12  4:42                                 ` vincent.diemunsch
@ 2015-05-12 14:53                                   ` johnscpg
  2015-05-13  9:14                                     ` vincent.diemunsch
  0 siblings, 1 reply; 107+ messages in thread
From: johnscpg @ 2015-05-12 14:53 UTC (permalink / raw)


On Tuesday, May 12, 2015 at 5:42:14 AM UTC+1, vincent....@gmail.com wrote:
> Oops, my message was incomplete : 
> 
> On Monday, May 11, 2015 at 23:49:50 UTC+2, vincent....@gmail.com wrote:
> > Hello Stefan,
> > 
> > The correct way of proving is to use SYMBOLIC COMPUTATION : you assume
> > basic facts on proof functions and then let the solver infer that your verification
> > condition extracted from the code is correct. 
> > 
> > >.................... Recently, when trying to write 
> > > a sorting procedure in SPARK, the prover has been unable to figure out 
> > > that, given an array A and two indices i /= j in the proper range, the new 
> > > array I get from A by swapping A(i) and A(j) (without changing any of the 
> > > values A(k) for k not in {i,j}), is a permutation of A. I haven't yet 
> > > solved the problem.
> > 
> > It was easy in Spark 2005 :
> > 
> > --# function Perm (A, B : Array_Type) return Boolean;
> 
> procedure Swap (T : in out Array_Type; I,J : Index_Type)
> --# derives T from T, I, J;
> --# post T(I) = T~(J) and T(J) = T~(I) and Perm (T, T~);
> 
> Then in the body of Swap :
> > --# assume Perm ( T~ [ I => T~(J); J => T~(I) ],  T~);
> > 
> > Then you can prove that A is a permutation of B by
> > proving that A results from a sequence of permutations
> > of two elements, starting with B. It is the case in all sorting
> > algorithms that I know (QuickSort, HeapSort, etc.).
> > 
> > Maybe you can also do that using "Ghost functions" in Spark 2014.
> > 
> > Regards,
> > 
> > Vincent

Claire Dross has a post on this .. sounds relevant but
I'm not sure!

http://www.spark-2014.org/entries/detail/manual-proof-in-spark-2014

J.






^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12  0:30                               ` Randy Brukardt
@ 2015-05-12 18:10                                 ` Paul Rubin
  2015-05-12 22:01                                   ` Randy Brukardt
  0 siblings, 1 reply; 107+ messages in thread
From: Paul Rubin @ 2015-05-12 18:10 UTC (permalink / raw)


"Randy Brukardt" <randy@rrsoftware.com> writes:
>> Randy, is that a real possibility with SPARK--to have a predicate that
>> says that an array is sorted 
> I have no idea, SPARK is not at all my idea of how Ada should work.

Hmm, ok, I had been confused on that issue.  

So can you say what the predicate for a sorted array would look like in
Janus/ADA, and how the compiler would verify it?

Someone else mentioned that it can probably be done with Spark 2014
using so-called ghost functions, which is pretty cool, and I believe
surpasses the capabilities of Spark's earlier incarnations. 

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12  8:37                                 ` Stefan.Lucks
  2015-05-12 11:25                                   ` Stefan.Lucks
@ 2015-05-12 18:39                                   ` Paul Rubin
  2015-05-12 20:51                                     ` Stefan.Lucks
  1 sibling, 1 reply; 107+ messages in thread
From: Paul Rubin @ 2015-05-12 18:39 UTC (permalink / raw)


Stefan.Lucks@uni-weimar.de writes:
> Either I write it as a second-class precondition in the form of a
> comment, or as a first-class precondition using the "pre" attribute, or
> something similar ("unchecked_pre" or whatever). I'd prefer the
> second, because the compiler will tell me if my precondition is
> syntactically correct.

I'd say the first-class precondition (if executable) is also better
because even if a runtime check (say for an array being sorted) is too
expensive to use in a deployed application, you can still enable the
check during testing and pay the runtime cost, possibly finding some
bugs that way.

>> I disagree, actually. I think any useful postcondition has to be
>> reasonably executable. Else there is no difference from a comment,

Don't you want {pre,post}conditions to be able to quantify over infinite
ranges, and therefore not always be executable?  For example, consider a
compiler optimization pass expressed as a function that takes a code
fragment and returns a new code fragment: the postcondition is that both
fragments have the same semantics, which means for all inputs, both
fragments compute the same result.  That postcondition is not executable
but it is (example: CompCert) potentially provable.  So wanting to state
and prove it is a legitimate desire.

> I don't care much about the syntax, some "unchecked_pre",
> "unchecked_post" attributes would be perfectly fine.

You could have a static precondition that accepts proof by assertion,
like in Coq where you can say "admitted".

>> Like everyone, I want it all, for free, right now. :-) Only Ada comes
>> close, and I just want to make it closer.
> Violent agreement, again!

Do you use other verification systems like Coq?  Are the ones used with
Ada comparable at all?  

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12 11:25                                   ` Stefan.Lucks
@ 2015-05-12 18:44                                     ` Paul Rubin
  2015-05-12 20:52                                       ` Stefan.Lucks
  2015-05-18  9:49                                     ` Jacob Sparre Andersen
  1 sibling, 1 reply; 107+ messages in thread
From: Paul Rubin @ 2015-05-12 18:44 UTC (permalink / raw)


Stefan.Lucks@uni-weimar.de writes:
>>   pre => (for all I in 2 .. N-1 => (N mod I /= 0))

Of course that still doesn't prove that N is prime: you need a separate
proof that N has no divisors greater than N, and that proof has to
quantify over all the integers, so it's not executable at all.  Stuff
like this is why formal methods are often less convenient than we'd
like.

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12 18:39                                   ` Paul Rubin
@ 2015-05-12 20:51                                     ` Stefan.Lucks
  0 siblings, 0 replies; 107+ messages in thread
From: Stefan.Lucks @ 2015-05-12 20:51 UTC (permalink / raw)



On Tue, 12 May 2015, Paul Rubin wrote:

> Stefan.Lucks@uni-weimar.de writes:
> You could have a static precondition that accepts proof by assertion,
> like in Coq where you can say "admitted".

Indeed, this is about the thing I am asking for. SPARK 2014 has a pragma 
for that.
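
Roughly like this -- a sketch, where Sorted_Prefix, A and K are made-up
names, not from any library:

   pragma Assume (Sorted_Prefix (A, A'First, K));
   --  The provers take this as given, much like "Admitted" in Coq,
   --  and it is the user's responsibility that it is actually true.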

>>> Like everyone, I want it all, for free, right now. :-) Only Ada comes
>>> close, and I just want to make it closer.
>> Violent agreement, again!
>
> Do you use other verification systems like Coq?

Trying out Coq is on my to-do list. Right now, I am only using SPARK 2014 
and the Alt-Ergo prover.

Stefan

------  I  love  the  taste  of  Cryptanalysis  in  the morning!  ------
uni-weimar.de/de/medien/professuren/mediensicherheit/people/stefan-lucks
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12 18:44                                     ` Paul Rubin
@ 2015-05-12 20:52                                       ` Stefan.Lucks
  0 siblings, 0 replies; 107+ messages in thread
From: Stefan.Lucks @ 2015-05-12 20:52 UTC (permalink / raw)



On Tue, 12 May 2015, Paul Rubin wrote:

> Stefan.Lucks@uni-weimar.de writes:
>>   pre => (for all I in 2 .. N-1 => (N mod I /= 0))
>
> Of course that still doesn't prove that N is prime: you need a separate
> proof that N has no divisors greater than N, and that proof has to
> quantify over all the integers, so it's not executable at all.  Stuff
> like this is why formal methods are often less convenient than we'd
> like.

Good Point!

------  I  love  the  taste  of  Cryptanalysis  in  the morning!  ------
uni-weimar.de/de/medien/professuren/mediensicherheit/people/stefan-lucks
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12 18:10                                 ` Paul Rubin
@ 2015-05-12 22:01                                   ` Randy Brukardt
  2015-05-13  9:35                                     ` Dmitry A. Kazakov
  0 siblings, 1 reply; 107+ messages in thread
From: Randy Brukardt @ 2015-05-12 22:01 UTC (permalink / raw)


"Paul Rubin" <no.email@nospam.invalid> wrote in message 
news:877fsd1xb5.fsf@jester.gateway.sonic.net...
> "Randy Brukardt" <randy@rrsoftware.com> writes:
>>> Randy, is that a real possibility with SPARK--to have a predicate that
>>> says that an array is sorted
>> I have no idea, SPARK is not at all my idea of how Ada should work.
>
> Hmm, ok, I had been confused on that issue.
>
> So can you say what the predicate for a sorted array would look like in
> Janus/ADA, and how the compiler would verify it?

I answered that somewhere else in this thread (it just uses appropriate 
predicates, usually "Dynamic_Predicates" since static predicates cannot 
contain anything useful for composite types).

                       Randy.


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12  7:21                                 ` Georg Bauhaus
@ 2015-05-12 22:08                                   ` Randy Brukardt
  0 siblings, 0 replies; 107+ messages in thread
From: Randy Brukardt @ 2015-05-12 22:08 UTC (permalink / raw)


"Georg Bauhaus" <bauhaus@futureapps.invalid> wrote in message 
news:mis9kh$ief$1@dont-email.me...
> On 12.05.15 03:03, Randy Brukardt wrote:
>> Consider just the interesting properties of a Vector container. The 
>> length,
>> capacity, and tampering state all immediately come to mind, and most
>> routines change none of them. How to communicate that?
>
> By your own words, wouldn't you just ask for a predicate
> that expresses all of that? Something like
>
>   Is_Mostly_Unchanged_A (Vector)
>
> in the positions you find adequate?

That would work, but you'd need a number of them (for all of the existing 
combinations of unchanged properties). But it does nothing for the readability problem 
of cluttering everything with that.

Humm, actually it doesn't work (at least not as you wrote it), because 'Old 
can only be written in a postcondition. (Else the lifetime issues become 
maddening.) You could write:

Is_Mostly_Unchanged_A (Vector, Vector'Old)

but that is a lot more expensive than the original, as here the entire 
vector is getting copied for the check, rather than just a bunch of simple 
properties.
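
To be concrete, "just a bunch of simple properties" could look like this --
a sketch against a hypothetical operation, using the usual Length and
Capacity queries of the Vector container:

   procedure Do_Something (V : in out Vector)
     with Post => Length (V) = Length (V)'Old
              and Capacity (V) = Capacity (V)'Old;
   --  Length (V)'Old and Capacity (V)'Old each save a single Count_Type,
   --  whereas V'Old would have to copy the entire vector.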

In any case, I've been mostly focusing on the precondition/predicate part of 
the proof. The postcondition/type_invariant part is a rather different 
kettle of fish.

                                        Randy.


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12  8:02                                 ` Georg Bauhaus
@ 2015-05-12 22:14                                   ` Randy Brukardt
  0 siblings, 0 replies; 107+ messages in thread
From: Randy Brukardt @ 2015-05-12 22:14 UTC (permalink / raw)


"Georg Bauhaus" <bauhaus@futureapps.invalid> wrote in message 
news:misc19$qbp$1@dont-email.me...
> On 12.05.15 03:03, Randy Brukardt wrote:
>> No, I'd still argue your code is broken. If *you* know that some object 
>> is
>> always sorted, then *you* should tell the compiler that with an 
>> appropriate
>> predicate:
>>
>>      subtype Sorted_Array is Some_Array
>>          with Dynamic_Predicate => Is_Sorted (Sorted_Array);
>>
>>     My_Array : Sorted_Array := ...;
>
> There is no formal specification of what Is_Sorted should be.

Huh? Is_Sorted is presumably an expression function that someone provided. 
Something like:

   function Is_Sorted (A : Some_Array) return Boolean is
      (A'Length < 2 or else
       (for all I in A'First .. T'Pred(A'Last) => A (I) <= A (T'Succ (I))));

(where T is the index subtype of Some_Array; I borrowed this from the RM 
example 4.5.8(11/3)).

> But there should be one, somewhere (other than a comment), even
> when the formal specification involves quite a bit.

What's wrong with the above? You could have used the expression instead, but 
the name helps clarify it for the human reader.

It's not like Is_Sorted is going to be some undefined thing; it has to be a 
function declared in Ada; preferably it is an expression function so 
everyone knows how it's defined (that's one reason those were added).

                                         Randy. 




^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12 14:21                                 ` Bob Duff
@ 2015-05-12 22:37                                   ` Randy Brukardt
  2015-05-13  6:58                                     ` Georg Bauhaus
  0 siblings, 1 reply; 107+ messages in thread
From: Randy Brukardt @ 2015-05-12 22:37 UTC (permalink / raw)


"Bob Duff" <bobduff@theworld.com> wrote in message 
news:87egml511m.fsf@theworld.com...
> "Randy Brukardt" <randy@rrsoftware.com> writes:
>
>>> I can see that there are some cases where the properties are too expensive to
>> verify at runtime. It would be nice if there was a way to turn off those
>> (AND ONLY THOSE) properties. But Ada doesn't have that sort of 
>> granularity,
>
> Sure it does.  If Is_Sorted is too slow for production use, you can say:
>
>    ... with Predicate => (if Slow_Mode then Is_Sorted(...))
>
> and set the Slow_Mode flag to True for testing.  Also set it to True
> when running proof tools.

Of course. That's essentially what I've ("we've", really, Isaac created a 
lot of the tracing stuff in Janus/Ada) been doing for years. I just hadn't 
thought of trying to use it directly in the assertions. We'd use a function 
call, though, rather than a constant:

    ... with Dynamic_Predicate => (if JTrace.Trace(Current_Unit) then 
Is_Sorted(...))

When compiled for testing, JTrace.Trace is a function call which returns 
true or false based on the selections from a tracing menu that pops up when 
the tracing options are used. When compiled for production use, Trace is an 
array with all of the elements set to False. (At least that was the idea, I 
don't think we ever used it that way.)

The downside here is a bit more noise, but the upsides are obvious (Stefan 
explained them in gory detail). One probably could design something shorter 
with the same effect (that would cut the noise).

                                     Randy.



>
> Alternatively, you can say something like:
>
>    function Sort(X: My_Array) return My_Array with
>        Post => (if X'Length <= 20 then Is_Sorted(Sort'Result));
>
> Now calls to Sort can be O(log N) instead of O(N).  And if Sort doesn't
> do anything special for arrays longer than 20, the postcondition is
> likely to catch any bugs in Sort.
>
>> Correct. If it hurts, don't write that. :-) I don't begin to believe that
>> all program properties can be proved.
>
> Yes, that's obviously true.  Here's a property of GNAT:
>
>    Compared to most compilers (for any language), GNAT usually gives
>    better error messages.
>
> Anybody who has used GNAT and a lot of other compilers knows that
> property is true.  But nobody can prove it in a mathematical sense,
> because there's no way to formalize it.
>
> - Bob 


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12  4:49                                   ` vincent.diemunsch
@ 2015-05-12 23:25                                     ` Peter Chapin
  2015-05-13  9:00                                       ` vincent.diemunsch
  0 siblings, 1 reply; 107+ messages in thread
From: Peter Chapin @ 2015-05-12 23:25 UTC (permalink / raw)



On Mon, 11 May 2015, vincent.diemunsch@gmail.com wrote:

> On Tuesday, May 12, 2015 at 00:49:40 UTC+2, Peter Chapin wrote:
>
>> In the book on SPARK 2014 John McCormick and I recently wrote 
>> (available later this summer) we show an example of proving a sorting 
>> procedure where the tools prove both
>>
>> a) The resulting array is sorted and
>> b) The resulting array is a permutation of the original array.
>>
>> We use ghost functions to do this.
>
> Tell us more ! Do you use the same basic idea : assuming that the swap 
> function is an elementary permutation of arrays ? Do you use an assume 
> statement ?

Yes, and yes. We do use Assume to describe the necessary mathematical 
properties involved. In particular we Assume the array after swapping two 
elements is a permutation of the original. Similarly we use Assume to 
embody the idea that permutations are transitive: if B is a permutation of 
A and C is a permutation of B, then C is a permutation of A. This is 
necessary to support a loop invariant asserting that the partially 
processed array is always a permutation of the original.

In our example we leave the ghost functions unimplemented so the tools 
can't learn anything about them other than what we describe in the 
assertions associated with them. This also means the program won't link 
unless it's compiled with an Assertion_Policy of Ignore.
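
Roughly, the shape of that pattern might be as follows -- a sketch based on
the description above, not the book's actual code; Array_Type, Index_Type
and Element_Type are assumed to be declared elsewhere:

   --  Spec-only ghost predicate: the provers know about Perm only what the
   --  surrounding assertions say, and with no body the program links only
   --  when assertions are compiled out (Assertion_Policy (Ignore)).
   function Perm (A, B : Array_Type) return Boolean
     with Ghost, Import;

   procedure Swap (T : in out Array_Type; I, J : Index_Type)
     with Post => T (I) = T'Old (J)
              and T (J) = T'Old (I)
              and Perm (T, T'Old);

   procedure Swap (T : in out Array_Type; I, J : Index_Type) is
      T_Init : constant Array_Type := T with Ghost;  --  ghost copy of the input
      Temp   : constant Element_Type := T (I);
   begin
      T (I) := T (J);
      T (J) := Temp;
      --  The mathematical fact the provers are told to take on faith:
      pragma Assume (Perm (T, T_Init));
   end Swap;

Whether the provers then discharge the postcondition exactly as written is a
separate question; the sketch is only meant to show where the Assume sits.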

The ghost functions could potentially be implemented with actual code and 
then one might hope the tools could learn enough from them to do without 
some of the Assumes, for example via contextual analysis. However, some 
caution is needed. You don't want to use the sorting procedure to 
implement the permutation checker that you are using to prove the sorting 
procedure!

>> The proofs can be processed with the upcoming SPARK GPL 2015 release, 
>> but I'm not sure if they will go through with SPARK GPL 2014.
>
> What is new with the SPARK GPL 2015 prover ?

Well, SPARK GPL 2015 ships with two theorem provers that it uses together: 
alt-ergo and cvc4 (you can also add others, such as z3). So, by default, 
anything that either prover can prove is seen as handled from the user's 
point of view. Since the provers have different strengths the combination 
gives you better coverage than either one alone. However, it also 
increases the amount of code you have to trust.

There are various other improvements. For example it handles bit 
manipulation operators a lot better than SPARK GPL 2014. Just consider 
this simple case:

     type Octet is mod 2**8;
     type Double_Octet is mod 2**16;

     X : Octet;
     Y : Double_Octet;

...

     X := Octet(Y and 16#00FF#);

As I recall SPARK GPL 2014 didn't understand that the masking operation 
chopped the size of the result into an acceptable range for type Octet and 
thus it complained that it couldn't prove there wasn't a range violation. 
SPARK GPL 2015 has no problem with the example above, and other similar 
examples.

Also my experience with using the 2015 version as opposed to the 2014 
version is that it requires a lot less fussing around with "obvious" 
things. Loop invariants, for example, can often be significantly 
simplified or even eliminated. The user experience with it is much better 
overall.

Non-linear floating point computations are still not handled that 
fantastically (I understand it's a hard problem). Thus if you are doing a 
lot of floating point work you may need to wait for more evolutionary 
improvements beyond even SPARK GPL 2015.

Peter

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12 22:37                                   ` Randy Brukardt
@ 2015-05-13  6:58                                     ` Georg Bauhaus
  2015-05-14  1:21                                       ` Randy Brukardt
  0 siblings, 1 reply; 107+ messages in thread
From: Georg Bauhaus @ 2015-05-13  6:58 UTC (permalink / raw)


On 13.05.15 00:37, Randy Brukardt wrote:
> "Bob Duff" <bobduff@theworld.com> wrote in message
> news:87egml511m.fsf@theworld.com...
>> "Randy Brukardt" <randy@rrsoftware.com> writes:
>>
>>>> I can see that there are some cases where the properties are too expensive to
>>> verify at runtime. It would be nice if there was a way to turn off those
>>> (AND ONLY THOSE) properties. But Ada doesn't have that sort of
>>> granularity,
>>
>> Sure it does.  If Is_Sorted is too slow for production use, you can say:
>>
>>     ... with Predicate => (if Slow_Mode then Is_Sorted(...))
>>
>> and set the Slow_Mode flag to True for testing.  Also set it to True
>> when running proof tools.
>
> Of course. That's essentially what I've ("we've", really, Isaac created a
> lot of the tracing stuff in Janus/Ada) been doing for years. I just hadn't
> thought of trying to use it directly in the assertions. We'd use a function
> call, though, rather than a constant:
>
>      ... with Dynamic_Predicate => (if JTrace.Trace(Current_Unit) then
> Is_Sorted(...))

Given this fine-grained run-time configuration (another IF and then
a little something like  a debugging thing from an implementation),
is the condition in the same category of expressions as Is_Sorted?

The second, Is_Sorted, is strictly about the parameters, contractual,
so to speak. The first looks rather different and distracting to me.

But in any case, then, maybe having a way of influencing the selection
of checks could be expressed as

    pragma Assertion_Policy (Post => Check and not MR_Is_Prime'Post);

This stipulates that policy_identifier in Assertion_Policy becomes just
a little more flexible, turning the conditional into a portable
feature that can be specified outside the contracts, but near them:


    pragma Assertion_Policy(
              assertion_aspect_mark => policy_setting
           {, assertion_aspect_mark => policy_setting});

    policy_setting ::= policy_identifier { and mute_list }
    mute_list ::= not defining_identifier'matching_mark
                {, not defining_identifier'matching_mark }


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12 23:25                                     ` Peter Chapin
@ 2015-05-13  9:00                                       ` vincent.diemunsch
  0 siblings, 0 replies; 107+ messages in thread
From: vincent.diemunsch @ 2015-05-13  9:00 UTC (permalink / raw)


Thanks Peter for your response,

> Yes, and yes. We do use Assume to describe the necessary mathematical 
> properties involved. In particular we Assume the array after swapping two 
> elements is a permutation of the original. Similarly we use Assume to 
> embody the idea that permutations are transitive: if B is a permutation of 
> A and C is a permtuation of B, then C is a permutation of A. This is 
> necessary to support a loop invariant asserting that the partially 
> processed array is always a permutation of the original.
> 
> In our example we leave the ghost functions unimplemented so the tools 
> can't learn anything about them other than what we describe in the 
> assertions associated with them. This also means the program won't link 
> unless it's compiled with an Assertion_Policy of Ignore.

That is a sound approach, since one can then give a proof of the "Assume"
statements in an external document that describes the mathematical foundation
of the program. 

> 
> The ghost functions could potentially be implemented with actual code and 
> then one might hope the tools could learn enough from them to do without 
> some of the Assumes, for example via contextual analysis. However, some 
> caution is needed. You don't want to use the sorting procedure to 
> implement the permutation checker that you are using to prove the sorting 
> procedure!

I agree. I prefer relying on an external proof checker to doing confusing functional manipulations, as ML or OCaml tend to do. 

Regards,

Vincent


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12 14:53                                   ` johnscpg
@ 2015-05-13  9:14                                     ` vincent.diemunsch
  0 siblings, 0 replies; 107+ messages in thread
From: vincent.diemunsch @ 2015-05-13  9:14 UTC (permalink / raw)


On Tuesday, May 12, 2015 at 16:53:47 UTC+2, john...@googlemail.com wrote:

> Claire Dross has a post on this .. sounds relevant but
> I'm not sure!
> 
> http://www.spark-2014.org/entries/detail/manual-proof-in-spark-2014
> 
> J.

Hum...

I find this proof excessively complicated. The idea of describing the proof
function as a functional program, through "ghost code", results in mixing
numerical computation and symbolic manipulation in the same code, much as ML
or OCaml like to do.

The problem with this approach is: how can you be sure that your proof is correct if it is hardly readable?

By the way, this is one of the consequences of not keeping logical formulas separate from Ada code: one ends up writing pseudo-Ada code for the sole purpose of handling logical formulas. The result is a kind of mess.

I prefer the Peter Chapin approach (see his message below).

Vincent

  


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12 22:01                                   ` Randy Brukardt
@ 2015-05-13  9:35                                     ` Dmitry A. Kazakov
  2015-05-13 11:53                                       ` G.B.
  0 siblings, 1 reply; 107+ messages in thread
From: Dmitry A. Kazakov @ 2015-05-13  9:35 UTC (permalink / raw)


On Tue, 12 May 2015 17:01:45 -0500, Randy Brukardt wrote:

> "Paul Rubin" <no.email@nospam.invalid> wrote in message 
> news:877fsd1xb5.fsf@jester.gateway.sonic.net...
>> "Randy Brukardt" <randy@rrsoftware.com> writes:
>>>> Randy, is that a real possibility with SPARK--to have a predicate that
>>>> says that an array is sorted
>>> I have no idea, SPARK is not at all my idea of how Ada should work.
>>
>> Hmm, ok, I had been confused on that issue.
>>
>> So can you say what the predicate for a sorted array would look like in
>> Janus/ADA, and how the compiler would verify it?
> 
> I answered that somewhere else in this thread (it just uses appropriate 
> predicates, usually "Dynamic_Predicates" since static predicates cannot 
> contain anything useful for composite types).

How so? Actually, the "sorted" predicate of an operation is rubbish. The
proper design is a composite type Sorted_Array with an *invariant* that
ensures the array is sorted. The invariant must be static, of course, as
everything else regarding correctness.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-13  9:35                                     ` Dmitry A. Kazakov
@ 2015-05-13 11:53                                       ` G.B.
  2015-05-13 12:47                                         ` Dmitry A. Kazakov
  2015-05-14  1:32                                         ` Randy Brukardt
  0 siblings, 2 replies; 107+ messages in thread
From: G.B. @ 2015-05-13 11:53 UTC (permalink / raw)


On 13.05.15 11:35, Dmitry A. Kazakov wrote:
> The
> proper design is a composite type Sorted_Array with an *invariant* that
> ensures the array is sorted.

This shifts responsibility.

Also, why does the array need to be sorted during
its entire life time? How do you know that?

A contract for Binary_Search could require that

(a) input data is any plain old array of numbers
(b) in sorted order, by "<="

Why impose some fancy abstraction on client code just because
the supplier feels it to be the proper design, reflecting his
particular idea of what the job really is, even though it is
not his job?

What is the difference between

-  the use of a sorted array (by precondition)
    in one place and

-  the use of a sorted array (by invariant)
    in the very same place,

when there is only this place?

   Binary_Search (It : Num; Data : Ary)
     with Pre => Is_Sorted (Data);

vs

   type Sorted_Array is private
      with Type_Invariant => Is_Sorted (Sorted_Array);

   Binary_Search (It : Num; Data : Sorted_Array);

Does every single, isolated algorithm on data warrant
a corresponding type?

There is no context here. So, there is nothing that says
that keeping some array-like private composite in sorted
order would improve the client's design.

If customers are to be told how to structure their data
so that the contract matches the supplier's design ideas,
why not require that the client keep an index for the array?
Then the supplier could offer his Binary_Search as a plain
Search, based on the properties of the customer-built index,
which would then get a suitable contract ...

I hope the shift in responsibility becomes apparent.

Every contract has to start from some reality. How could
a design be compared to another without knowing the exact
problem?



^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-13 11:53                                       ` G.B.
@ 2015-05-13 12:47                                         ` Dmitry A. Kazakov
  2015-05-13 14:06                                           ` G.B.
  2015-05-14  1:32                                         ` Randy Brukardt
  1 sibling, 1 reply; 107+ messages in thread
From: Dmitry A. Kazakov @ 2015-05-13 12:47 UTC (permalink / raw)


On Wed, 13 May 2015 13:53:59 +0200, G.B. wrote:

> On 13.05.15 11:35, Dmitry A. Kazakov wrote:
>> The
>> proper design is a composite type Sorted_Array with an *invariant* that
>> ensures the array is sorted.
> 
> This shifts responsibility.

Yes, to where it belongs.

> Also, why does the array need to be sorted during
> its entire life time? How do you know that?

From its type, of course.

> A contract for Binary_Search could require that
> 
> (a) input data is any plain old array of numbers
> (b) in sorted order, by "<="

That is not a contract of binary search; it is a contract of a type of which
[binary] search is an operation. Individual contracts on operations are a
mess, tolerated only when it would be too complicated to deal with this at
the type level. A sorted constraint is not such a thing.

> What is the difference between
> 
> -  the use of a sorted array (by precondition)
>     in one place and

Untyped mess

> -  the use of a sorted array (by invariant)
>     in the very same place,

Properly designed software, with types reflecting problem-space entities.
Note that this would not only ensure the absence of run-time errors; it would
also allow safely searching unsorted arrays: you would convert the unsorted
array type to the sorted one, and the conversion would sort it.

> Does every single, isolated algorithm on data warrant
> a corresponding type?

Certainly so. Operations are defined on types.
 
-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-13 12:47                                         ` Dmitry A. Kazakov
@ 2015-05-13 14:06                                           ` G.B.
  2015-05-13 14:21                                             ` Dmitry A. Kazakov
  0 siblings, 1 reply; 107+ messages in thread
From: G.B. @ 2015-05-13 14:06 UTC (permalink / raw)


On 13.05.15 14:47, Dmitry A. Kazakov wrote:
> Operations are defined on types.

Or, operations are of a type and Ada has no way to name
this type directly?



^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-13 14:06                                           ` G.B.
@ 2015-05-13 14:21                                             ` Dmitry A. Kazakov
  2015-05-13 16:33                                               ` G.B.
  0 siblings, 1 reply; 107+ messages in thread
From: Dmitry A. Kazakov @ 2015-05-13 14:21 UTC (permalink / raw)


On Wed, 13 May 2015 16:06:18 +0200, G.B. wrote:

> On 13.05.15 14:47, Dmitry A. Kazakov wrote:
>> Operations are defined on types.
> 
> Or, operations are of a type and Ada has no way to name
> this type directly?

All types have names in Ada.

The problem is when several types from the problem space are lumped into
one language type. That is either lousy design or else a language deficiency.
E.g. in C everything is an int. Preconditions put on operations promote a
C-esque approach to software design.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-13 14:21                                             ` Dmitry A. Kazakov
@ 2015-05-13 16:33                                               ` G.B.
  2015-05-13 19:15                                                 ` Dmitry A. Kazakov
  0 siblings, 1 reply; 107+ messages in thread
From: G.B. @ 2015-05-13 16:33 UTC (permalink / raw)


On 13.05.15 16:21, Dmitry A. Kazakov wrote:
> On Wed, 13 May 2015 16:06:18 +0200, G.B. wrote:
>
>> On 13.05.15 14:47, Dmitry A. Kazakov wrote:
>>> Operations are defined on types.
>>
>> Or, operations are of a type and Ada has no way to name
>> this type directly?
>
> All types have names in Ada.

But what is the type of an operation, so that conditions
(plural) on them do not create an untyped mess, but
adequately represent a well designed contract?

Conjecture: Types and operations are like hens and eggs.

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-13 16:33                                               ` G.B.
@ 2015-05-13 19:15                                                 ` Dmitry A. Kazakov
  2015-05-14  1:36                                                   ` Randy Brukardt
  0 siblings, 1 reply; 107+ messages in thread
From: Dmitry A. Kazakov @ 2015-05-13 19:15 UTC (permalink / raw)


On Wed, 13 May 2015 18:33:45 +0200, G.B. wrote:

> On 13.05.15 16:21, Dmitry A. Kazakov wrote:
>> On Wed, 13 May 2015 16:06:18 +0200, G.B. wrote:
>>
>>> On 13.05.15 14:47, Dmitry A. Kazakov wrote:
>>>> Operations are defined on types.
>>>
>>> Or, operations are of a type and Ada has no way to name
>>> this type directly?
>>
>> All types have names in Ada.
> 
> But what is the type of an operation,

Anonymous. Operations are second-class citizens in Ada.

> so that conditions
> (plural) on them do not create an untyped mess, but
> adequately represent a well designed contract?
> 
> Conjecture: Types and operations are like hens and eggs.

Which is another way of saying that operations cannot have contracts that are
independent of their types.

I remind you of earlier discussions on whether Positive is a type and about
the merits of LSP. Each time you change anything, you create a *new* type. Thus
when you constrain the set of values in an operation, e.g. by putting a
precondition on it, you create a *new* type that carries this constraint. If you
keep on pretending this new type is the same as the original type, you depart
from strong typing and go to weak typing or no typing. And you will be dearly
punished for that by debugging countless exceptions popping up here and there,
always when you expect them least.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-13  6:58                                     ` Georg Bauhaus
@ 2015-05-14  1:21                                       ` Randy Brukardt
  0 siblings, 0 replies; 107+ messages in thread
From: Randy Brukardt @ 2015-05-14  1:21 UTC (permalink / raw)


"Georg Bauhaus" <bauhaus@futureapps.invalid> wrote in message 
news:miusk8$ife$1@dont-email.me...
> On 13.05.15 00:37, Randy Brukardt wrote:
>> "Bob Duff" <bobduff@theworld.com> wrote in message
>> news:87egml511m.fsf@theworld.com...
>>> "Randy Brukardt" <randy@rrsoftware.com> writes:
>>>
>>>> I can see that there are some cases where the properties are too expensive to
>>>> verify at runtime. It would be nice if there was a way to turn off 
>>>> those
>>>> (AND ONLY THOSE) properties. But Ada doesn't have that sort of
>>>> granularity,
>>>
>>> Sure it does.  If Is_Sorted is too slow for production use, you can say:
>>>
>>>     ... with Predicate => (if Slow_Mode then Is_Sorted(...))
>>>
>>> and set the Slow_Mode flag to True for testing.  Also set it to True
>>> when running proof tools.
>>
>> Of course. That's essentially what I've ("we've", really, Isaac created a
>> lot of the tracing stuff in Janus/Ada) been doing for years. I just 
>> hadn't
>> thought of trying to use it directly in the assertions. We'd use a 
>> function
>> call, though, rather than a constant:
>>
>>      ... with Dynamic_Predicate => (if JTrace.Trace(Current_Unit) then
>> Is_Sorted(...))
>
> Given this fine-grained run-time configuration (another IF and then
> a little something like  a debugging thing from an implementation),
> is the condition in the same category of expressions as Is_Sorted?
>
> The second, Is_Sorted, is strictly about the parameters, contractual,
> so to speak. The first looks rather different and distracting to me.

I think I made that point in my original message; this looks rather heavy 
and hurts the readability of the predicates. But that certainly could be 
reduced by naming and defaults.

  -- At the start of the package:
    Check : Boolean renames JTrace.Trace(Current_Unit);

      ... with Dynamic_Predicate => (if Check then Is_Sorted(...))

That certainly depends upon your needs; I typically use 3 or 4 levels of 
tracing crosscut with subsystem or package level control. That might be 
overkill for many, and perhaps Bob's direct solution would be enough for 
you.

                                Randy.




^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-13 11:53                                       ` G.B.
  2015-05-13 12:47                                         ` Dmitry A. Kazakov
@ 2015-05-14  1:32                                         ` Randy Brukardt
  2015-05-14  7:19                                           ` Dmitry A. Kazakov
  1 sibling, 1 reply; 107+ messages in thread
From: Randy Brukardt @ 2015-05-14  1:32 UTC (permalink / raw)


"G.B." <bauhaus@futureapps.invalid> wrote in message 
news:mivdul$bh9$1@dont-email.me...
...
> when there is only this place?
>
>   Binary_Search (It : Num; Data : Ary)
>     with Pre => Is_Sorted (Data);
>
> vs
>
>   type Sorted_Array is private
>      with Type_Invariant => Is_Sorted (Sorted_Array);
>
>   Binary_Search (It : Num; Data : Sorted_Array);

I'd prefer to use predicates and subtypes to describe this (since Dmitry 
denies the value of subtypes, I'm not surprised that he'd use an invariant 
instead). Specifically:

   subtype Sorted_Array is Ary
      with Dynamic_Predicate => Is_Sorted (Sorted_Array);

   function Binary_Search (It : Num; Data : Sorted_Array) return Boolean;

Note that there is no precondition here; it's embodied in the subtypes. 
That's typical for Ada code going back to the beginning of Ada time; it 
makes sense to expand upon it. After all, no one is writing Pre => It in 
Num'range, even though that is clearly part of the precondition as well.

With this specification, you can pass in an object of type Ary that you 
don't know is sorted (and get a check on the call, just like the check on 
Num), or you can pass in an object of subtype Sorted_Array and push the 
check to elsewhere. I prefer the latter, because no one should be assuming a 
critical property like Is_Sorted for the binary search; it ought to be 
declared elsewhere that it is required on the parameter, else you have a 
maintenance hazard (maintainer doesn't know about the Is_Sorted requirement, 
so they change the array in some way that the check is no longer true. 
Problems ensue.)
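
A usage sketch of the two cases (Read_Values, Sort and X are hypothetical
helpers and values, used here only to give the objects something to hold):

   Raw    : Ary          := Read_Values;  --  contents not known to be sorted
   Sorted : Sorted_Array := Sort (Raw);   --  Is_Sorted checked right here

   Found_1 : Boolean := Binary_Search (X, Raw);     --  predicate checked at the call
   Found_2 : Boolean := Binary_Search (X, Sorted);  --  check already done at the declaration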

                                       Randy.






^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-13 19:15                                                 ` Dmitry A. Kazakov
@ 2015-05-14  1:36                                                   ` Randy Brukardt
  2015-05-14  7:10                                                     ` Dmitry A. Kazakov
  0 siblings, 1 reply; 107+ messages in thread
From: Randy Brukardt @ 2015-05-14  1:36 UTC (permalink / raw)



"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:5d5k4n9blbfl.hx4jw5yqa1o4.dlg@40tude.net...
...
> I remind you of earlier discussions on whether Positive is a type and about
> the merits of LSP. Each time you change anything, you create a *new* type. 
> Thus
> when you constrain the set of values in an operation, e.g. by putting a
> precondition on it, you create a *new* type that carries this constraint. If you
> keep on pretending this new type is the same as the original type, you depart
> from strong typing and go to weak typing or no typing. And you will be dearly
> punished for that by debugging countless exceptions popping up here and 
> there,
> always when you expect them least.

Ergo, Dmitry has a truly bizarre world-view. Not much to be learned from 
that, sadly.

Positive is not a type, and never will be a type (from an Ada world-view, 
which is all we're talking about here). Makes no sense whatsoever. And it 
doesn't help anyone to ignore reality.

                           Randy.


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-14  1:36                                                   ` Randy Brukardt
@ 2015-05-14  7:10                                                     ` Dmitry A. Kazakov
  0 siblings, 0 replies; 107+ messages in thread
From: Dmitry A. Kazakov @ 2015-05-14  7:10 UTC (permalink / raw)


On Wed, 13 May 2015 20:36:41 -0500, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:5d5k4n9blbfl.hx4jw5yqa1o4.dlg@40tude.net...
> ...
>> I remind you of earlier discussions on whether Positive is a type and about
>> the merits of LSP. Each time you change anything, you create a *new* type. 
>> Thus
>> when you constrain the set of values in an operation, e.g. by putting a
>> precondition on it, you create a *new* type that carries this constraint. If you
>> keep on pretending this new type is the same as the original type, you depart
>> from strong typing and go to weak typing or no typing. And you will be dearly
>> punished for that by debugging countless exceptions popping up here and 
>> there, always when you expect them least.
> 
> Ergo, Dmitry has a truly bizarre world-view. Not much to be learned from 
> that, sadly.
> 
> Positive is not a type, and never will be a type (from an Ada world-view, 
> which is all we're talking about here). Makes no sense whatsoever. And it 
> doesn't help anyone to ignore reality.

How do you know whether it does? You consider useless any formal definitions
and the conclusions deduced from them (AKA theorems). Thus the only sense
anything could make (to you) would be observed practical effects. Not
predicted ones, since those would again be the damned theorems... Anyway,
the consequences are plentiful; see accessibility checks.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-14  1:32                                         ` Randy Brukardt
@ 2015-05-14  7:19                                           ` Dmitry A. Kazakov
  0 siblings, 0 replies; 107+ messages in thread
From: Dmitry A. Kazakov @ 2015-05-14  7:19 UTC (permalink / raw)


On Wed, 13 May 2015 20:32:36 -0500, Randy Brukardt wrote:

> I'd prefer to use predicates and subtypes to describe this (since Dmitry 
> denies the value of subtypes,

I never said that. My point is that an Ada subtype is a type, related to
its base type. It is enough of a type to change the behavior of some
operations. That's it.

Ada subtypes, as a method of producing new types (specialization), are very
useful.
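
A small, made-up illustration of what "changes the behavior of some
operations" means in practice:

   procedure Demo is
      subtype Digit is Integer range 0 .. 9;
      X : Integer := 10;
      D : Digit   := 0;
   begin
      pragma Assert (Digit'Last = 9);   --  the attributes differ
      D := X;   --  the same ":=" now carries a range check and
                --  raises Constraint_Error here
   end Demo;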

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-12 11:25                                   ` Stefan.Lucks
  2015-05-12 18:44                                     ` Paul Rubin
@ 2015-05-18  9:49                                     ` Jacob Sparre Andersen
  2015-05-18 12:10                                       ` Stefan.Lucks
  1 sibling, 1 reply; 107+ messages in thread
From: Jacob Sparre Andersen @ 2015-05-18  9:49 UTC (permalink / raw)


Stefan.Lucks@uni-weimar.de wrote:

> Thirdly, even if it were way too expensive to check this
> specification at run time, it may be possible to verify it
> statically. But I can only find that out by writing the specification
> and then running my tool (the compiler or whatever). Again, it would
> be a terrible idea for the compiler to insert expensive checks just
> because the theorem prover failed to prove the condition.

Why?  Wouldn't it be better to have the check enabled by default, and
only disable it once profiling has shown that it is prohibitively
expensive?

I am aware that we currently don't have as fine-grained control of
assertions as that would require to work well, but I assume that this is
something that can be discussed with the ARG and the compiler vendors.
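
The control we do have today is pragma Assertion_Policy (RM 11.4.2), which
works per assertion kind and per region of program text, not per individual
condition; roughly:

   pragma Assertion_Policy (Pre => Check, Post => Ignore);
   --  every precondition in the affected region is checked and every
   --  postcondition is ignored; there is no way to single out one
   --  particularly expensive condition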

Greetings,

Jacob
-- 
There really was only one way to make a person unlearn something ...


^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-18  9:49                                     ` Jacob Sparre Andersen
@ 2015-05-18 12:10                                       ` Stefan.Lucks
  2015-05-19  7:46                                         ` Jacob Sparre Andersen
  0 siblings, 1 reply; 107+ messages in thread
From: Stefan.Lucks @ 2015-05-18 12:10 UTC (permalink / raw)



On Mon, 18 May 2015, Jacob Sparre Andersen wrote:

> Stefan.Lucks@uni-weimar.de wrote:
>
>> Thirdly, even if it were way too expensive to check this
>> specification at run time, it may be possible to verify it
>> statically. But I can only find that out by writing the specification
>> and then running my tool (the compiler or whatever). Again, it would
>> be a terrible idea for the compiler to insert expensive checks just
>> because the theorem prover failed to prove the condition.
>
> Why?  Wouldn't it be better to have the check enabled by default, and
> only disable it once profiling has shown that it is prohibitively
> expensive?

Well, as a very simple example, consider this:

 	(for all I in 2 .. N-1 => (N mod I) /= 0)

You don't need a profiler to figure out that this is prohibitively slow 
for largish N, do you?
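
Written out as a contract on a hypothetical subprogram, that check is paid
on every call while assertions are enabled:

   procedure Consume_Prime (N : Positive)
     with Pre => (for all I in 2 .. N - 1 => N mod I /= 0);
   --  O(N) divisions per call: harmless for small N, hopeless for large N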

> I am aware that we currently don't have as fine-grained control of 
> assertions as that would require to work well, but I assume that this is 
> something that can be discussed with the ARG and the compiler vendors.

This is precisely my point!

Stefan

------  I  love  the  taste  of  Cryptanalysis  in  the morning!  ------
uni-weimar.de/de/medien/professuren/mediensicherheit/people/stefan-lucks
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-18 12:10                                       ` Stefan.Lucks
@ 2015-05-19  7:46                                         ` Jacob Sparre Andersen
  2015-06-09  7:55                                           ` Stefan.Lucks
  0 siblings, 1 reply; 107+ messages in thread
From: Jacob Sparre Andersen @ 2015-05-19  7:46 UTC (permalink / raw)


Stefan.Lucks@uni-weimar.de writes:
> On Mon, 18 May 2015, Jacob Sparre Andersen wrote:

> Well, as a very simple example, consider this:
>
> 	(for all I in 2 .. N-1 => (N mod I) /= 0)
>
> You don't need a profiler to figure out that this is prohibitively
> slow for largish N, do you?

I don't need a profiler to estimate that it takes a long time to execute,
but I need a profiler to see where the compiler can't eliminate it from
a critical path through static analysis.

>> I am aware that we currently don't have as fine-grained control of
>> assertions as that would require to work well, but I assume that this
>> is something that can be discussed with the ARG and the compiler
>> vendors.
>
> This is precisely my point!

Good.  I noticed an interesting proposal for an extension to the
assertion policy control in one of the posts in this thread.  I suppose
we should push to have the ARG accept this (or something similar).

Greetings,

Jacob
-- 
There really was only one way to make a person unlearn something ...

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-05-19  7:46                                         ` Jacob Sparre Andersen
@ 2015-06-09  7:55                                           ` Stefan.Lucks
  2015-06-09 12:02                                             ` G.B.
  0 siblings, 1 reply; 107+ messages in thread
From: Stefan.Lucks @ 2015-06-09  7:55 UTC (permalink / raw)



On Tue, 19 May 2015, Jacob Sparre Andersen wrote:

> Stefan.Lucks@uni-weimar.de writes:
>>
>> 	(for all I in 2 .. N-1 => (N mod I) /= 0)
>>
> I don't need a profiler to estimate that it takes a long time to execute,
> but I need a profiler to see where the compiler can't eliminate it from
> a critical path through static analysis.

Sorry, I forgot about this thread ... so my response is somewhat late.

If N is large enough, the compiler would have to eliminate that expression 
from all paths, not just from its critical path. (Think of N \approx 
2**60. Which comes first: your program passing or failing all test cases, 
or your retirement?)
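
To put a number on it: 2**60 is about 1.15 * 10**18 iterations, so even at
an (optimistically assumed) 10**9 modulo operations per second, a single
evaluation of that check takes about 1.15 * 10**9 seconds, i.e. roughly 36
years.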

> Good.  I noticed an interesting proposal for an extension to the
> assertion policy control in one of the posts in this thread.  I suppose
> we should push to have the ARG accept this (or something similar).

Agreed!

------  I  love  the  taste  of  Cryptanalysis  in  the morning!  ------
uni-weimar.de/de/medien/professuren/mediensicherheit/people/stefan-lucks
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-06-09  7:55                                           ` Stefan.Lucks
@ 2015-06-09 12:02                                             ` G.B.
  2015-06-09 17:16                                               ` Stefan.Lucks
  0 siblings, 1 reply; 107+ messages in thread
From: G.B. @ 2015-06-09 12:02 UTC (permalink / raw)


On 09.06.15 09:55, Stefan.Lucks@uni-weimar.de wrote:
> On Tue, 19 May 2015, Jacob Sparre Andersen wrote:
>
>> Stefan.Lucks@uni-weimar.de writes:
>>>
>>>     (for all I in 2 .. N-1 => (N mod I) /= 0)
>>>
>> I don't need a profiler to estimate that it takes a long time to execute,
>> but I need a profiler to see where the compiler can't eliminate it from
>> a critical path through static analysis.
>
> Sorry, I forgot about this thread ... so my response is somewhat late.
>
> If N is large enough, the compiler would have to eliminate that
> expression from all paths, not just from its critical path. (Think of N
> \approx 2**60. Which comes first: your program passing or failing all
> test cases, or your retirement?)
>
>> Good.  I noticed an interesting proposal for an extension to the
>> assertion policy control in one of the posts in this thread.  I suppose
>> we should push to have the ARG accept this (or something similar).
>
> Agreed!

It has been suggested in some places that assertion control
should be exercised with the help of a Config package.

    with $Condition$ =>
        (if Config.Is_Expensive_Test_666 (N)
         then True
         else (for all I in 2 .. N-1 => (N mod I) /= 0));

While it was considered practical, maybe even charming (Config is just
Ada, it lets one choose the configuration names, and it permits
conditional conditions, cf. N above), it would inevitably introduce
idiosyncrasies and, more importantly IMO, shift attention to the Config
machinery. The latter becomes the more off-topic the more it reflects a
project's own vocabulary, which cannot be as universally familiar as a
quantified_expression.

Does Config belong in a contract?

Moreover, aspects that let one address the issue at the language level,
outside the contract and without relying on mere conventions, would be
more standard and would meet standard expectations, somewhat like the way
contracts always define things and state when, and to what, they apply.

Such aspects and the notion of configuration do not need to exclude each
other, as the words "Configuration Pragma" already suggest.
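
For concreteness, a minimal sketch of the kind of Config package that the
aspect above presupposes; the Num_Type stand-in and the cut-off value are
invented here:

   subtype Num_Type is Long_Long_Integer;  --  stand-in for the real type

   package Config is
      --  True when the full divisor scan would be too expensive to run
      --  for this N; the threshold is a per-project tuning decision.
      function Is_Expensive_Test_666 (N : Num_Type) return Boolean is
        (N > 100_000);
   end Config;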

^ permalink raw reply	[flat|nested] 107+ messages in thread

* Re: {Pre,Post}conditions and side effects
  2015-06-09 12:02                                             ` G.B.
@ 2015-06-09 17:16                                               ` Stefan.Lucks
  0 siblings, 0 replies; 107+ messages in thread
From: Stefan.Lucks @ 2015-06-09 17:16 UTC (permalink / raw)



On Tue, 9 Jun 2015, G.B. wrote:

> It has been suggested in some places that assertion control
> should be exercised with the help of a Config package.
>
>   with $Condition$ =>
>       (if Config.Is_Expensive_Test_666 (N)
>        then True
>        else (for all I in 2 .. N-1 => (N mod I) /= 0));

On one hand, it is cool that the syntax for contracts is sufficiently
flexible to allow this kind of thing. On the other hand, this is a plain
abuse of the syntax for contracts, and it significantly decreases the
contract's readability, because the real meaning is just

    $Condition$ =>
       (for all I in 2 .. N-1 => (N mod I) /= 0)

while the rest *could* be misunderstood as "under certain circumstances,
the above condition doesn't need to hold", rather than "this condition 
must always hold, even though it is not always checked or verified".

> While it was considered practical, maybe even charming because Config
> is just Ada, allowing to choose configuration names, permitting
> conditional conditions (cf. N above), it would inevitably introduce
> idiosyncrasies and, more importantly IMO, shift attention to Config
> stuff.

Indeed!

A somewhat lightweight extension of the Ada 2012 syntax could allow 
$Condition$(Boolean_Expression), with $Condition$(True) being the same 
as $Condition$, e.g.:

   pre(False) =>  -- require this, but never check it
     (for all I in 2 .. N-1 => (N mod I) /= 0),

   pre(Testing_Level >= Minimum) =>
     ((N mod 2 /= 0) and then (N mod 3 /= 0)),

   pre(Testing_Level >= High) =>  -- checked during nightly tests
     (for all I in 5 .. Num_Type'Min (2**32, N-1) => (N mod I) /= 0);
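
None of this is legal Ada 2012 as written; the closest approximation today
is to fold the level test into the condition itself, with a project-defined
static Testing_Level (all of the names below are assumed):

   type Level is (None, Minimum, High);
   Testing_Level : constant Level := Minimum;  --  set per build

   subtype Num_Type is Long_Long_Integer;      --  stand-in, as before

   procedure Consume_Prime (N : Num_Type)
     with Pre =>
       (Testing_Level < Minimum
          or else (N mod 2 /= 0 and then N mod 3 /= 0))
       and then
       (Testing_Level < High
          or else (for all I in 5 .. Num_Type'Min (2**32, N - 1) =>
                     N mod I /= 0));
   --  the "require this, but never check" case still needs
   --  pragma Assertion_Policy (Pre => Ignore) on top of this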



------  I  love  the  taste  of  Cryptanalysis  in  the morning!  ------
uni-weimar.de/de/medien/professuren/mediensicherheit/people/stefan-lucks
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--

^ permalink raw reply	[flat|nested] 107+ messages in thread

end of thread, other threads:[~2015-06-09 17:16 UTC | newest]

Thread overview: 107+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-12-22 16:22 {Pre,Post}conditions and side effects Jean François Martinez
2014-12-22 17:18 ` Brad Moore
2014-12-23  8:22   ` Jean François Martinez
2014-12-23 17:05     ` Robert A Duff
2014-12-23 21:09       ` Jean François Martinez
2014-12-23 21:35         ` Robert A Duff
2014-12-23 23:02         ` Peter Chapin
2014-12-24  1:03           ` Robert A Duff
2015-04-24  8:59             ` Jacob Sparre Andersen
2015-04-24  9:18               ` J-P. Rosen
2015-04-24 23:39                 ` Randy Brukardt
2015-04-24 12:10               ` G.B.
2015-04-24 14:40                 ` Jacob Sparre Andersen
2015-04-24 16:29                   ` G.B.
2015-04-24 23:46                   ` Randy Brukardt
2015-04-24 22:26               ` Peter Chapin
2015-04-25  0:13                 ` Randy Brukardt
2015-04-25  1:01                   ` Peter Chapin
2015-04-25  5:51                 ` Dmitry A. Kazakov
2015-04-25  0:31               ` Bob Duff
2015-04-25 12:08                 ` vincent.diemunsch
2015-04-25 16:37                   ` Georg Bauhaus
2015-05-06 21:07                   ` Randy Brukardt
2015-05-06 22:10                     ` Paul Rubin
2015-05-07  9:01                     ` Georg Bauhaus
2015-05-07  9:12                       ` Dmitry A. Kazakov
2015-05-07  9:29                         ` Georg Bauhaus
2015-05-07  9:31                           ` Georg Bauhaus
2015-05-07 18:32                         ` Randy Brukardt
2015-05-08  7:50                           ` Dmitry A. Kazakov
2015-05-08 23:31                             ` Randy Brukardt
2015-05-09  6:16                               ` Dmitry A. Kazakov
2015-05-12  0:28                                 ` Randy Brukardt
2015-05-12  8:04                                   ` Dmitry A. Kazakov
2015-05-07 10:06                     ` Stefan.Lucks
2015-05-07 12:16                       ` Dmitry A. Kazakov
2015-05-07 18:00                         ` Stefan.Lucks
2015-05-07 19:01                           ` Randy Brukardt
2015-05-07 19:29                             ` Niklas Holsti
2015-05-08 23:16                               ` Randy Brukardt
2015-05-09  5:18                                 ` Niklas Holsti
2015-05-12  0:15                                   ` Randy Brukardt
2015-05-07 19:55                             ` Dmitry A. Kazakov
2015-05-08 23:24                               ` Randy Brukardt
2015-05-09  5:47                                 ` Dmitry A. Kazakov
2015-05-07 18:52                       ` Randy Brukardt
2015-05-07 19:40                         ` Stefan.Lucks
2015-05-08  7:28                           ` Dmitry A. Kazakov
2015-05-08 22:58                             ` Randy Brukardt
2015-05-08 22:52                           ` Randy Brukardt
2015-05-09  0:14                             ` Paul Rubin
2015-05-12  0:30                               ` Randy Brukardt
2015-05-12 18:10                                 ` Paul Rubin
2015-05-12 22:01                                   ` Randy Brukardt
2015-05-13  9:35                                     ` Dmitry A. Kazakov
2015-05-13 11:53                                       ` G.B.
2015-05-13 12:47                                         ` Dmitry A. Kazakov
2015-05-13 14:06                                           ` G.B.
2015-05-13 14:21                                             ` Dmitry A. Kazakov
2015-05-13 16:33                                               ` G.B.
2015-05-13 19:15                                                 ` Dmitry A. Kazakov
2015-05-14  1:36                                                   ` Randy Brukardt
2015-05-14  7:10                                                     ` Dmitry A. Kazakov
2015-05-14  1:32                                         ` Randy Brukardt
2015-05-14  7:19                                           ` Dmitry A. Kazakov
2015-05-12  0:36                               ` Randy Brukardt
2015-05-11 10:35                             ` Stefan.Lucks
2015-05-11 21:49                               ` vincent.diemunsch
2015-05-11 22:49                                 ` Peter Chapin
2015-05-12  4:49                                   ` vincent.diemunsch
2015-05-12 23:25                                     ` Peter Chapin
2015-05-13  9:00                                       ` vincent.diemunsch
2015-05-12  4:42                                 ` vincent.diemunsch
2015-05-12 14:53                                   ` johnscpg
2015-05-13  9:14                                     ` vincent.diemunsch
2015-05-12  1:03                               ` Randy Brukardt
2015-05-12  7:21                                 ` Georg Bauhaus
2015-05-12 22:08                                   ` Randy Brukardt
2015-05-12  8:02                                 ` Georg Bauhaus
2015-05-12 22:14                                   ` Randy Brukardt
2015-05-12  8:37                                 ` Stefan.Lucks
2015-05-12 11:25                                   ` Stefan.Lucks
2015-05-12 18:44                                     ` Paul Rubin
2015-05-12 20:52                                       ` Stefan.Lucks
2015-05-18  9:49                                     ` Jacob Sparre Andersen
2015-05-18 12:10                                       ` Stefan.Lucks
2015-05-19  7:46                                         ` Jacob Sparre Andersen
2015-06-09  7:55                                           ` Stefan.Lucks
2015-06-09 12:02                                             ` G.B.
2015-06-09 17:16                                               ` Stefan.Lucks
2015-05-12 18:39                                   ` Paul Rubin
2015-05-12 20:51                                     ` Stefan.Lucks
2015-05-12 14:21                                 ` Bob Duff
2015-05-12 22:37                                   ` Randy Brukardt
2015-05-13  6:58                                     ` Georg Bauhaus
2015-05-14  1:21                                       ` Randy Brukardt
2015-05-07 21:29                         ` Georg Bauhaus
2015-05-08 23:11                           ` Randy Brukardt
2015-05-08 23:19                             ` tmoran
2014-12-23 21:53   ` Florian Weimer
2014-12-24 11:41     ` Brad Moore
2014-12-22 19:38 ` sbelmont700
2014-12-22 19:59   ` Brad Moore
2014-12-22 23:46   ` Randy Brukardt
2014-12-23 10:41   ` Georg Bauhaus
2014-12-22 23:32 ` Randy Brukardt
2015-04-24 17:59   ` Shark8

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox