* New to Spark, working an example
From: addaon @ 2018-12-16 5:43 UTC

Folks, new to this list, so not quite sure of the etiquette. I've been trying to understand SPARK 2014 well enough to work through an example, and to understand the capabilities and workflow of the tools. The example I chose was computing floor(log2(n)) for positive n. Rather than put a long post here, I'll refer to my (long) post at Stack Overflow: https://stackoverflow.com/questions/53752715/proving-floor-log2-in-spark. (If this is bad etiquette here, let me know and I'll fix it -- but it does seem a bit silly to duplicate the content in two locations.) Since SO seems to have a very limited Ada/SPARK community, I'm hoping someone here can point me in the right direction.

Basically, I'm trying to work out which tools I should be trying to understand at this point. :-) Should I be looking at proving this with just a better understanding of how to write loop invariants; through appropriate lemmas; through an external prover like Coq; or something else?

Thanks!
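For readers following the thread without the linked post, the specification under discussion can be modelled in plain Python (an illustrative sketch of the intended contract only, not the SPARK code from the Stack Overflow question):

```python
def floor_log2(x: int) -> int:
    """Largest r such that 2**r <= x, for positive x."""
    assert x > 0  # precondition: n must be positive
    r = 0
    while 2 ** (r + 1) <= x:
        r += 1
    # The property to be proven in SPARK, as a runtime check here:
    # 2**r <= x < 2**(r+1), i.e. r = floor(log2(x)).
    assert 2 ** r <= x < 2 ** (r + 1)
    return r
```

The difficulty discussed downthread is convincing the prover of exactly this postcondition statically, rather than checking it at runtime.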

* Re: New to Spark, working an example
From: Simon Wright @ 2018-12-16 9:48 UTC

I don't think there's anything wrong with trying to attract attention (what gets my goat a bit is people posting the same question in both places at the same time).

I have to confess that I hadn't set up my SO account to watch the tags [spark-2014] or [spark-ada] (why both?), or even [gnat] or [ada2012] -- rectified. You would have got more views if you'd included [ada] (but not necessarily any (more) answers :)

Your problems are an indication of why I, as a person who has no access to professional SPARK support, haven't invested any effort to speak of in SPARK (my difficulties were with tasking/time rather than mathematical loops, which tend to be rare in control systems).

That said, it looks to me as though the version of gnatprove in GNAT CE 2018 may not fully understand exponentiation:

    util.ads:3:14: medium: postcondition might fail, cannot prove 2 ** Floor_Log2'Result <= X
    util.ads:3:16: medium: overflow check might fail (e.g. when Floor_Log2'Result = 0)

* Re: New to Spark, working an example
From: Brad Moore @ 2018-12-19 2:41 UTC

On Sunday, December 16, 2018 at 2:48:19 AM UTC-7, Simon Wright wrote:
> I don't think there's anything wrong with trying to attract attention
> (what gets my goat a bit is people posting the same question in both
> places at the same time).
>
> I have to confess that I hadn't set up my SO account to watch the tags
> [spark-2014] or [spark-ada] (why both?), or even [gnat] or [ada2012] --
> rectified. You would have got more views if you'd included [ada] (but
> not necessarily any (more) answers :)
>
> Your problems are an indication of why I, as a person who has no access
> to professional SPARK support, haven't invested any effort to speak of
> in SPARK (my difficulties were with tasking/time rather than
> mathematical loops, which tend to be rare in control systems).
>
> That said, it looks to me as though the version of gnatprove in GNAT CE
> 2018 may not fully understand exponentiation:
>
> util.ads:3:14: medium: postcondition might fail, cannot prove 2 ** Floor_Log2'Result <= X
> util.ads:3:16: medium: overflow check might fail (e.g. when Floor_Log2'Result = 0)

I am by no means a SPARK expert, but I am also interested in exploring SPARK capabilities. My approach led me to the following solution, using just the SPARK 2018 GPL download from AdaCore (no extra provers were needed here, other than the ones that come with GNAT CE 2018).

As an aside, it appears the version of gnatprove in GNAT CE 2018 does have a pretty good understanding of exponentiation, given that I was able to get the following proven.

    package Util
      with SPARK_Mode
    is
       Max_Log2 : constant := Positive'Size - 1;

       subtype Log_Result is Natural range 0 .. Max_Log2;

       function Floor_Log2 (X : Positive) return Log_Result
         with Global  => null,
              Depends => (Floor_Log2'Result => X),
              Post    => X >= 2**Floor_Log2'Result
                         and then X / 2 < 2**Floor_Log2'Result;
    end Util;

    pragma Ada_2012;

    package body Util
      with SPARK_Mode
    is
       function Floor_Log2 (X : Positive) return Log_Result is
       begin
          Log_Loop :
          for I in Log_Result loop
             pragma Loop_Invariant (for all J in 0 .. I => X >= 2**J);
             pragma Assert (X / 2 < 2**Log_Result'Last);

             if X / 2 < 2**I then
                pragma Assert (X >= 2**I);
                pragma Assert (X / 2 < 2**I);
                return I;
             end if;

             pragma Assume (I /= Log_Result'Last);
          end loop Log_Loop;

          return Log_Result'Last;
       end Floor_Log2;
    end Util;

I technically didn't need the Global aspect or the Depends aspect to prove this function, but I think it is a good idea to provide a more detailed contract using additional SPARK and Ada features, when possible.

The approach I took was, first of all, to make use of Ada 2012 contracts to constrain the result to only valid values: the Log_Result subtype only includes valid result values. I think this is an important goal in general for eliminating bugs, whether writing regular Ada or SPARK. My view is that, in general, types such as Integer and Float should not be used, since they describe memory storage, not values of interest in the application domain. By creating types that more accurately represent the application domain, I believe the job of writing proofs in SPARK becomes much easier, since the prover can reason that the values assigned to such objects have specific ranges and properties.

Another point is to try to write an implementation that is easier to prove. For that reason, I wrote this as a for loop rather than a while loop, because the prover can reason statically about how many iterations are performed and what the values of the loop parameter can be.

The prover was able to prove all the assertions in the implementation.
I had to leave in one assumption (the pragma Assume):

    pragma Assume (I /= Log_Result'Last);

Without that, the prover complains that the postcondition, X / 2 < 2**Floor_Log2'Result, cannot be proven. It appears the prover is not able to show that the loop exited via the return statement, rather than running the full number of iterations and exiting the loop without entering the if statement.

However, I think this can be visually inspected and confirmed to be true, since the assert just prior to the if statement, pragma Assert (X / 2 < 2**Log_Result'Last), was proven. It follows that if that assertion is true, the if statement must be entered by the time I = Log_Result'Last, and the return then exits the loop. Thus, the reader should be able to tell that it is impossible to get past the if statement when I = Log_Result'Last, and so the pragma Assume is true. The return at the end of the function should never be executed, as the only way to exit the function is via the return inside the loop. I didn't need the return inside the loop for the purpose of proving the function; I just did that to eliminate the need for extra variable declarations.

Probably the prover could be improved so that such an assume could be eliminated while still proving the overall function. There may be a way to add additional asserts or pragmas to eliminate the need for the pragma Assume. So far I haven't found one, but perhaps someone else might come up with a way. Otherwise, I'm pretty happy with the solution I ended up with, given that the one assume in the code can easily be checked visually for correctness.

I am sure other SPARK solutions exist. I think when it comes to proving something, it is better to start with something simple, and to have in mind an implementation that is easier to prove. This should make it easier to arrive at a proof.

Brad
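The claim behind that pragma Assume -- that the loop always exits via the inner return, never by exhausting the range -- can be checked exhaustively for small inputs with a quick Python model of the loop (an informal sanity check under the assumption of a 32-bit Positive, not a proof):

```python
MAX_LOG2 = 31  # mirrors Positive'Size - 1 for a 32-bit Positive

def floor_log2(x: int) -> tuple[int, bool]:
    """Mirror the SPARK loop; also report whether the inner return fired."""
    assert x > 0
    for i in range(MAX_LOG2 + 1):      # for I in Log_Result loop
        if x // 2 < 2 ** i:            # if X / 2 < 2**I then
            return i, True             # return I (the normal exit)
    return MAX_LOG2, False             # the final return, believed unreachable

# Every positive x in the tested range exits via the inner return,
# so pragma Assume (I /= Log_Result'Last) is never contradicted,
# and the result agrees with floor(log2(x)).
assert all(floor_log2(x) == (x.bit_length() - 1, True)
           for x in range(1, 1 << 16))
```

This only exercises a small range, of course; the point of the visual argument in the post above is that the same holds for all of Positive.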

* Re: New to Spark, working an example
From: Simon Wright @ 2018-12-19 16:58 UTC

Brad Moore <bmoore.ada@gmail.com> writes:

> On Sunday, December 16, 2018 at 2:48:19 AM UTC-7, Simon Wright wrote:
>> util.ads:3:16: medium: overflow check might fail (e.g. when
>> Floor_Log2'Result = 0)

> As an aside, it appears the version of gnatprove in GNAT CE 2018 does
> have a pretty good understanding of exponentiation, given that I was
> able to get the following proven.

Apparently so. But the part of gnatprove that gives examples of when the assertion might fail is quite misleading: for example,

    util.ads:7:14: medium: postcondition might fail, cannot prove 2 ** Floor_Log2'Result <= X (e.g. when Floor_Log2'Result = 0 and X = 0)

*when X is Positive* !! and

    util.adb:19:15: medium: overflow check might fail (e.g. when I = 0)

    l.18    for I in 1 .. Log_Result'Last loop
    l.19       if 2 ** I > X then

* Re: New to Spark, working an example
From: Brad Moore @ 2018-12-20 4:34 UTC

On Wednesday, December 19, 2018 at 9:58:44 AM UTC-7, Simon Wright wrote:
> Apparently so. But the part of gnatprove that gives examples of when the
> assertion might fail is quite misleading: for example,
>
> util.ads:7:14: medium: postcondition might fail, cannot prove 2 **
> Floor_Log2'Result <= X (e.g. when Floor_Log2'Result = 0 and X = 0)
>
> *when X is Positive* !! and
>
> util.adb:19:15: medium: overflow check might fail (e.g. when I = 0)
>
> l.18    for I in 1 .. Log_Result'Last loop
> l.19       if 2 ** I > X then

I agree that the error messages are misleading, as I was getting similar messages when I was working on this. While the "0" values mentioned in the error messages were confusing to me, the messages were helpful at least in suggesting the sort of checks the prover was trying to discharge, which ultimately helped me figure out the assertions that were needed to get this to pass. The values given can be a bit of a red herring sometimes, but I think the underlying check described by the message is more helpful.

This is only the second problem I have attempted to prove in SPARK, so I didn't know whether I would succeed, or much about how to approach it. It's kind of a rewarding feeling when you get the prover to pass.
One suggestion I have for proving postconditions is to state the postcondition as an assert before returning from the subprogram, and work backwards from there.

Brad
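That suggestion generalizes beyond this particular function: restating the postcondition as an assertion just before the return gives you a concrete, executable check to work backwards from. A hypothetical illustration with integer square root (Python used only for illustration; in SPARK this would be a Post aspect plus pragma Assert):

```python
def isqrt(x: int) -> int:
    """Floor of the square root of a non-negative integer."""
    assert x >= 0  # precondition
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    # Postcondition restated as a final assert: if it ever fails at
    # runtime, it points at exactly the property that needs to be
    # strengthened with earlier asserts or loop invariants.
    assert r * r <= x < (r + 1) * (r + 1)
    return r
```

Once the final assert holds, each intermediate assert added while working backwards becomes a candidate loop invariant or lemma for the prover.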

end of thread, other threads: [~2018-12-20 4:34 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-12-16  5:43 New to Spark, working an example  addaon
2018-12-16  9:48 ` Simon Wright
2018-12-19  2:41   ` Brad Moore
2018-12-19 16:58     ` Simon Wright
2018-12-20  4:34       ` Brad Moore
