comp.lang.ada
* SPARK
@ 2001-08-06 16:49 programmer
  2001-08-07  7:04 ` SPARK Hambut
                   ` (2 more replies)
  0 siblings, 3 replies; 61+ messages in thread
From: programmer @ 2001-08-06 16:49 UTC (permalink / raw)


I have been to various web sites like www.sparkada.com in an effort
to research the SPARK language.

Is SPARK Examiner the only SPARK language tool available?

What is the cost?

Are there any open source alternatives?

I am fascinated by the concept of verifying correctness
completely at compiler/pre-compile time.

Answers to my questions and any additional information
would be greatly appreciated.

Thanks.





^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2001-08-06 16:49 SPARK programmer
@ 2001-08-07  7:04 ` Hambut
  2001-08-07  7:18 ` SPARK Hambut
  2001-08-07  8:37 ` SPARK Peter Amey
  2 siblings, 0 replies; 61+ messages in thread
From: Hambut @ 2001-08-07  7:04 UTC (permalink / raw)


"programmer" <nospam@nowhere.com> wrote in message news:<UUzb7.42923$uj2.7462172@news02.optonline.net>...
> I have been to various web sites like www.sparkada.com in an effort
> to research the SPARK language.
> 
> Is SPARK Examiner the only SPARK language tool available?

Yes.

> 
> What is the cost?

Well, you certainly couldn't contemplate buying it personally.  Perhaps
one of the readers from Praxis might want to comment.

You can get a restricted version in the back of the SPARK book by John
Barnes (unfortunately I don't have it to hand, and so can't give the
correct title).  I think it's restricted with respect to the size of
software it will analyse, and to the level of analysis it carries out.

> 
> Are there any open source alternatives?

Not that I'm aware of.  Now there's an idea for a useful tool someone
could develop as open source........

<snip>



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2001-08-06 16:49 SPARK programmer
  2001-08-07  7:04 ` SPARK Hambut
@ 2001-08-07  7:18 ` Hambut
  2001-08-07  8:37 ` SPARK Peter Amey
  2 siblings, 0 replies; 61+ messages in thread
From: Hambut @ 2001-08-07  7:18 UTC (permalink / raw)


Here's a link to John Barnes' book "High Integrity Ada":

http://www.amazon.co.uk/exec/obidos/ASIN/0201175177/202-2966724-2375825



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2001-08-06 16:49 SPARK programmer
  2001-08-07  7:04 ` SPARK Hambut
  2001-08-07  7:18 ` SPARK Hambut
@ 2001-08-07  8:37 ` Peter Amey
  2001-08-07 14:42   ` SPARK McDoobie
                     ` (2 more replies)
  2 siblings, 3 replies; 61+ messages in thread
From: Peter Amey @ 2001-08-07  8:37 UTC (permalink / raw)




programmer wrote:
> 
> I have been to various web sites like www.sparkada.com in an effort
> to research the SPARK language.
> 
> Is SPARK Examiner the only SPARK language tool available?
> 
> What is the cost?
> 
> Are there any open source alternatives?
> 
> I am fascinated by the concept of verifying correctness
> completely at compiler/pre-compile time.
> 
> Answers to my questions and any additional information
> would be greatly appreciated.


Thanks for the interest (and to Hambut who provided some accurate
answers).

There are two aspects to SPARK: the language definition and the analysis
tool (the Examiner).  The ingredients are complementary because it is the
tightness of the language definition which permits deep analysis to be
performed in computationally efficient ways.  It also allows the
analysis to be performed on incomplete programs, during development,
which is vital if the approach is to save time and money.

I often characterize SPARK as an "Ada amplifier".  If you think of all
the things like strong typing, constraint checking etc. that makes Ada
efficient at error prevention and early error detection then add in a
whole raft of extra checks ranging from system wide data flow analysis
up to proof of exception freedom then that is SPARK.  (Incidentally, to
pick up on another thread in CLA, proof of exception freedom would also
be a proof of invulnerability to buffer overflow attacks without any
run-time overhead.)
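
(For a flavour of what the Examiner sees, a minimal sketch of my own
with invented names, not production code:)

   package Sensor
   --# own State;
   is
      procedure Read (Value : out Integer);
      --# global in out State;
      --# derives Value, State from State;
   end Sensor;

The Examiner checks, across the whole program, that each body reads and
updates exactly what its annotations claim, before any proof work
starts.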

The language definition is freely available as is the paper that lays
the foundations of the Examiner's method of data and information flow
analysis.  So, if you are really keen, much of the requirements
specification for an open source alternative to the Examiner exists. 
You should be warned, however, that I have spent most of the last 10
years of my life, with some very high quality help, writing and
improving the Examiner (which BTW is written in SPARK and analyses and
approves of itself) so the task is not one for the faint hearted.

The Barnes book has recently been updated and the publishers have not
done a very good job of telling people about it: if you try and get a
copy and are told it is out of print then tell them they are wrong!  The
demo Examiner is limited only in size; all analysis options are
available.

Finally, those whose appetites have been whetted may like to know that
we are running an open SPARK course in the USA from 24th - 28th
September.  Details on www.sparkada.com

Peter


-- 
---------------------------------------------------------------------------   
      __         Peter Amey,  
        )                    Praxis Critical Systems Ltd
       /                     20, Manvers Street, Bath, BA1 1PX
      / 0        Tel: +44 (0)1225 466991
     (_/         Fax: +44 (0)1225 469006
                 http://www.praxis-cs.co.uk/

--------------------------------------------------------------------------



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2001-08-07  8:37 ` SPARK Peter Amey
@ 2001-08-07 14:42   ` McDoobie
  2001-08-09 12:36   ` SPARK Peter Amey
  2001-08-14  3:14   ` SPARK Prof Karl Kleine
  2 siblings, 0 replies; 61+ messages in thread
From: McDoobie @ 2001-08-07 14:42 UTC (permalink / raw)


Peter Amey wrote:
> 
> programmer wrote:
> 
>>What is the cost?
> 
<snip>

This is something I'd certainly like to get my hands on. The problem is 
that it costs too damn much!

But perhaps, instead of re-creating the entire SPARK toolchain, one
person (or a group) could create a new toolchain, or an Open Source (or
Free Software) subset of SPARK geared around the things that Open Source
programmers use most often.

I'm sure Praxis could reap a bundle if they were to put together a
package geared (and priced) towards individual developers, in addition
to the huge corporate and military systems they work with right now.
Even a small but carefully chosen subset of the SPARK system, priced
right and sitting on the shelf at CompUSA, would not only make them
loads of money (assuming they market it properly), but would likely
attract large numbers of developers to Ada in general. Not to mention
making at least a small percentage of overall improvement in the kinds
of software being developed today.

Just a thought. Maybe it's wishful thinking.

I would jump at the chance to develop a similar LGPL'd package. The
problem is that I don't have the amount of actual engineering experience
that would be required to develop anything worthwhile.
However, I'd certainly be glad to work under someone who does.

Thoughts? Possibilities? Potential?

What do the SPARK guys think?

McDoobie
chris@dont.spam.me





^ permalink raw reply	[flat|nested] 61+ messages in thread

* SPARK
@ 2001-08-08  9:46 Soeren.Henssel-Rasmussen
  2001-08-08 20:04 ` SPARK McDoobie
  0 siblings, 1 reply; 61+ messages in thread
From: Soeren.Henssel-Rasmussen @ 2001-08-08  9:46 UTC (permalink / raw)
  To: comp.lang.ada

Chris,

have a look at the Anna toolset from Stanford:
	Research website => http://pavg.stanford.edu/
	Download         => ftp://pavg.stanford.edu/pub/anna/anna.intro
	                    (remove "anna.intro" to see all files)

I worked on the tools a couple of years ago, but have not used them since.

regards  /soren
Søren Henssel-Rasmussen 
soeren.henssel-rasmussen@nokia.com 




^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2001-08-08  9:46 SPARK Soeren.Henssel-Rasmussen
@ 2001-08-08 20:04 ` McDoobie
  0 siblings, 0 replies; 61+ messages in thread
From: McDoobie @ 2001-08-08 20:04 UTC (permalink / raw)


Soeren.Henssel-Rasmussen@nokia.com wrote:
> Chris,
> 
> have a look at the Anna toolset from Stanford:
> 	Research website => http://pavg.stanford.edu/
> 	Download         => ftp://pavg.stanford.edu/pub/anna/anna.intro
> 	                    (remove "anna.intro" to see all files)
> 
> I worked on the tools a couple of years ago, but have not used them since.
> 
> regards  /soren
> Søren Henssel-Rasmussen 
> soeren.henssel-rasmussen@nokia.com 
> 
> 

Thanks for the reference.  You can be sure I'll be spending some time 
getting used to these tools. Hopefully, as my experience grows, I'll be 
able to submit some contributions to the project.

McDoobie
chris@dont.spam.me




^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2001-08-07  8:37 ` SPARK Peter Amey
  2001-08-07 14:42   ` SPARK McDoobie
@ 2001-08-09 12:36   ` Peter Amey
  2001-08-14  3:14   ` SPARK Prof Karl Kleine
  2 siblings, 0 replies; 61+ messages in thread
From: Peter Amey @ 2001-08-09 12:36 UTC (permalink / raw)



I should, perhaps, have mentioned in my earlier reply that we do provide
fully-supported SPARK tool sets for academic use.  Details are on
sparkada.com.  I appreciate that this does not directly help
hobbyists/enthusiasts/very small projects; we will keep thinking of ways
to serve these potential users.

Peter



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2001-08-07  8:37 ` SPARK Peter Amey
  2001-08-07 14:42   ` SPARK McDoobie
  2001-08-09 12:36   ` SPARK Peter Amey
@ 2001-08-14  3:14   ` Prof Karl Kleine
  2001-08-14 10:25     ` SPARK Rod Chapman
  2 siblings, 1 reply; 61+ messages in thread
From: Prof Karl Kleine @ 2001-08-14  3:14 UTC (permalink / raw)


Peter Amey <peter.amey@praxis-cs.co.uk> wrote:
[.....]
> The Barnes book has recently been updated and the publishers have not
> done a very good job of telling people about it: if you try and get a
> copy and are told it is out of print then tell them they are wrong!  The
> demo Examiner is limited only in size; all analysis options are
> available.
[.....]

Peter, I just checked the 'SPARK Book' entry on the www.sparkada.com site
and there is an offer of an update of the demo software on the CDROM, but
it does not say anything about the update of the book itself. Yes, I do
own a private copy (of course, being a serious person :-).

Two questions:

 -- Is there an update paper for the book itself, or something to this effect?

 -- How do I recognize the updated book? New isbn? "second edition" in title? 

Thanks!
Karl Kleine



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2001-08-14  3:14   ` SPARK Prof Karl Kleine
@ 2001-08-14 10:25     ` Rod Chapman
  0 siblings, 0 replies; 61+ messages in thread
From: Rod Chapman @ 2001-08-14 10:25 UTC (permalink / raw)


Prof Karl Kleine <kleine@fh-jena.de> wrote in message news:<9la53h$k62$1@beta.szi.fh-jena.de>...
> Peter, I just checked the 'SPARK Book' entry on the www.sparkada.com site
> and there is an offer of an update of the demo software on the CDROM, but
> it does not say anything about the update of the book itself. Yes, I do
> own a private copy (of course, being a serious person :-).
> 
> Two questions:
> 
>  -- Is there an update paper for the book itself, or something to this effect?

Your best bet is to read the Examiner release 5 release note which is
on www.sparkada.com under "Downloads".

By request, we will also send anyone the full SPARK definition.  This
is less easy to read than the book, though (i.e. it's written
in "LRM-speak" rather than "Barnes-speak").

>  -- How do I recognize the updated book? New isbn? "second edition" in title? 

Strictly speaking, it is a second _printing_ (with updates) of the
first edition.  It has "Reprinted with revisions 2000" on the copyright
page, and the CDROM in the back is labelled "Release 2.00 August 2000".

The ISBN is the same.

Hope that helps,
 Rod Chapman
 SPARK Products Manager,
 Praxis Critical Systems
 rod@praxis-cs.co.uk



^ permalink raw reply	[flat|nested] 61+ messages in thread

* SPARK
  2004-08-26 14:02           ` timeouts Georg Bauhaus
@ 2004-08-26 23:03             ` Brian May
  2004-08-27 10:11               ` SPARK Georg Bauhaus
  0 siblings, 1 reply; 61+ messages in thread
From: Brian May @ 2004-08-26 23:03 UTC (permalink / raw)


>>>>> "Georg" == Georg Bauhaus <sb463ba@l1-hrz.uni-duisburg.de> writes:

    Georg> Steve <nospam_steved94@comcast.net> wrote:
    Georg> : Hmmm... Maybe if they re-implemented GNAT in SPARK ;-)

    Georg> Will that produce an Ada compiler with a number of capacity limits?

Dare I ask, what is SPARK?

I gather it is some tool to make Ada programs even more reliable?
-- 
Brian May <bam@snoopy.apana.org.au>



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2004-08-26 23:03             ` SPARK Brian May
@ 2004-08-27 10:11               ` Georg Bauhaus
  0 siblings, 0 replies; 61+ messages in thread
From: Georg Bauhaus @ 2004-08-27 10:11 UTC (permalink / raw)


Brian May <bam@snoopy.apana.org.au> wrote:
:>>>>> "Georg" == Georg Bauhaus <sb463ba@l1-hrz.uni-duisburg.de> writes:
: 
:    Georg> Steve <nospam_steved94@comcast.net> wrote:
:    Georg> : Hmmm... Maybe if they re-implemented GNAT in SPARK ;-)
: 
:    Georg> Will that produce an Ada compiler with a number of capacity limits?
: 
: Dare I ask, what is SPARK?
: 
: I gather it is some tool to make Ada programs even more reliable?

http://www.sparkada.com/


-- Georg



^ permalink raw reply	[flat|nested] 61+ messages in thread

* SPARK
@ 2009-06-10  9:47 Robert Matthews
  0 siblings, 0 replies; 61+ messages in thread
From: Robert Matthews @ 2009-06-10  9:47 UTC (permalink / raw)


I see there is a discussion of SPARK over at LWN:
http://lwn.net/Articles/336656/

Robert





^ permalink raw reply	[flat|nested] 61+ messages in thread

* SPARK
@ 2010-05-12 22:55 Yannick Duchêne (Hibou57)
  2010-05-13  0:52 ` SPARK Yannick Duchêne (Hibou57)
                   ` (9 more replies)
  0 siblings, 10 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-12 22:55 UTC (permalink / raw)


Hello,

A short title, “SPARK”, as short as a subset (joking).

I wanted to start some SPARK-related questions, keeping them apart from
the “GNAT packages in Linux” thread.


Well, sorry for that, but my first question is going to be about
licensing. The GPL applies to SPARK-GPL (oh? really?). How does it apply?
I'm dubious about it in this context, because using SPARK to check an
implementation does not imply any kind of linking to any binary or
library, and I did not see anything stating that there are license
restrictions on the usage of SPARK annotations in Ada sources.

So what are the concrete restrictions?



I hope this first question will get a short answer, so let us jump to
the second question, which is intended to address one major aspect of
SPARK for people who know/use Ada: the Ada subset and its restrictions
(not to disclaim or disagree, mostly to talk and understand more).

The Praxis reference for SPARK-95 says:

[Praxis SPARK95 3]
> SPADE-Pascal was developed essentially by excising from
> ISO-Pascal those features which were considered particularly
> “dangerous”, or which could give rise to intractable validation
> problems - such as variant records,
That said, variant records can be emulated by the user or program too,
as that is just what many people were doing when there was no language
support for them. What is different about a Pascal or Ada variant record
compared to a user or program doing the same thing explicitly? Maybe the
answer is just in that last word: “explicitly”?
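
(As an illustration of the kind of "by hand" emulation I mean, a sketch
with invented names:)

    type Shape_Kind is (Circle, Square);

    type Shape is record
       Kind   : Shape_Kind; -- explicit tag replacing the discriminant
       Radius : Float;      -- meaningful only when Kind = Circle
       Side   : Float;      -- meaningful only when Kind = Square
    end record;

Here the program maintains the Kind/component consistency explicitly,
where a real variant record lets the language enforce it.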

Does this quotation from the Praxis reference suggest SPARK has some
trouble with the implicit or the underlying?

If SPARK knew about a formalization of what is going on with variant
records, as explicit as the equivalent written out by the user or
program, would that do the trick?

What could make this difficult or untrustworthy?



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-12 22:55 SPARK Yannick Duchêne (Hibou57)
@ 2010-05-13  0:52 ` Yannick Duchêne (Hibou57)
  2010-05-13  3:06 ` SPARK Yannick Duchêne (Hibou57)
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-13  0:52 UTC (permalink / raw)


On Thu, 13 May 2010 00:55:02 +0200, Yannick Duchêne (Hibou57)
<yannick_duchene@yahoo.fr> wrote:
> I hope this first question will get a short answer, so let us jump to
> the second question, which is intended to address one major aspect of
> SPARK for people who know/use Ada: the Ada subset and its restrictions
> (not to disclaim or disagree, mostly to talk and understand more).

An important quote everyone should know about; especially the last
sentence:

[Praxis SPARK 95 (5)]
> Some readers may be dismayed to see so many features of Ada removed,
> and feel that SPARK is “too small”. It is by no means the largest
> subset of Ada which would be amenable to analysis by the techniques
> employed in SPADE, but it is significantly larger than SPADE-Pascal,
> which has been found adequate for a substantial number of
> safety-critical projects. The additional features which appear in
> SPARK (such as packages and private types) make programming simpler
> and safer, rather than complicate the verification task. Of course,
> the extent to which Ada must be simplified for high-integrity
> programming will be a matter of opinion: our preoccupation with
> simplicity is based on experience of what can be achieved with a
> high degree of confidence in practice, rather than what can be
> proved in principle.


-- 
pragma Asset ? Is that true ? Waaww... great



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-12 22:55 SPARK Yannick Duchêne (Hibou57)
  2010-05-13  0:52 ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-13  3:06 ` Yannick Duchêne (Hibou57)
  2010-05-13  9:28   ` SPARK stefan-lucks
  2010-05-14 22:55   ` SPARK Yannick Duchêne (Hibou57)
  2010-05-13  4:00 ` SPARK Yannick Duchêne (Hibou57)
                   ` (7 subsequent siblings)
  9 siblings, 2 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-13  3:06 UTC (permalink / raw)


Some quick thoughts to feed the discussion and give interested parties
an opportunity to take part.

Although I can understand most of the restrictions SPARK applies (many
of them are meaningful and beneficial even without using SPARK, and I
already apply a lot of them), I still don't understand one: the
restriction against derived types (except tagged types, which are
supported by SPARK). What was the rationale for that?

Functions may have legitimate side effects, as with memoisation.
Memoisation is a kind of optimization, and optimizations should be
transparent to clients. SPARK would require splitting such a function
into two parts: a kind of “prepare/compute” procedure and a “get result”
function, the procedure being invoked prior to each function invocation.
Did you ever face this?
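
(A sketch of the split I have in mind, with invented names:)

    --# own Cache;

    procedure Compute (X : in Integer);
    --# global in out Cache;
    --# derives Cache from Cache, X;

    function Result_Of (X : Integer) return Integer;
    --# global in Cache;

The procedure performs or reuses the computation and updates the hidden
state; the function is then a pure read of that state.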

Block statements are not allowed in SPARK. Let's say a procedure can do
the same. Was this only to have one feature per concern (avoid having
both block statements and procedures, and keep only one) for the sake of
simplicity, or were there other reasons related to verifiability?

-- 
pragma Asset ? Is that true ? Waaww... great



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-12 22:55 SPARK Yannick Duchêne (Hibou57)
  2010-05-13  0:52 ` SPARK Yannick Duchêne (Hibou57)
  2010-05-13  3:06 ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-13  4:00 ` Yannick Duchêne (Hibou57)
  2010-05-13 16:54 ` SPARK Yannick Duchêne (Hibou57)
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-13  4:00 UTC (permalink / raw)


On Thu, 13 May 2010 00:55:02 +0200, Yannick Duchêne (Hibou57)
<yannick_duchene@yahoo.fr> wrote:
> How does it apply? I'm dubious about it in this context, because using
> SPARK to check an implementation does not imply any kind of linking to
> any binary or library, and I did not see anything stating that there
> are license restrictions on the usage of SPARK annotations in Ada
> sources.

At least the package SPARK_IO on Windows is MGPL:

[from the SPARK_IO header]
-- As a special exception, if other files instantiate generics from this
-- unit, or you link this unit with other files to produce an executable,
-- this unit does not by itself cause the resulting executable to be
-- covered by the GNU General Public License. This exception does not
-- however invalidate any other reasons why the executable file might be
-- covered by the GNU Public License.


-- 
pragma Asset ? Is that true ? Waaww... great



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-13  3:06 ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-13  9:28   ` stefan-lucks
  2010-05-13 16:48     ` SPARK Yannick Duchêne (Hibou57)
  2010-05-14 22:55   ` SPARK Yannick Duchêne (Hibou57)
  1 sibling, 1 reply; 61+ messages in thread
From: stefan-lucks @ 2010-05-13  9:28 UTC (permalink / raw)


On Thu, 13 May 2010, Yannick Duchêne (Hibou57) wrote:

> Functions may have legitimate side effects, as with memoisation.
> Memoisation is a kind of optimization, and optimizations should be
> transparent to clients. SPARK would require splitting such a function
> into two parts: a kind of “prepare/compute” procedure and a “get result”
> function, the procedure being invoked prior to each function invocation.
> Did you ever face this?

When working with SPARK, it is difficult *not* to face that. E.g., 
usually, you would write a pseudorandom number generator as 

  function Rnd return T is
  begin
    Global_State := f(Global_State);
    return g(Global_State);
  end; 

In SPARK, you have to replace the function Rnd by a procedure, which
delivers the result as an out parameter.
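
(An illustrative sketch of the SPARK version, reusing the names above:)

  procedure Rnd (Result : out T)
  --# global in out Global_State;
  --# derives Result, Global_State from Global_State;
  is
  begin
    Global_State := f(Global_State);
    Result := g(Global_State);
  end Rnd;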

Very inconvenient! But reasonable:

Note that the result from evaluating an expression such as "Rnd + Rnd**2" 
depends on the order of evaluation, which isn't specified in the Ada 
standard. If we write I for the initial global state, then "Rnd + Rnd**2" 
can either evaluate to

  g(f(I)) + g(f(f(I)))**2,

or to

  g(f(f(I))) + g(f(I))**2.

Such an unspecified behaviour makes static analysis very difficult. In 
principle, you would have to consider all possible orders of evaluation 
and check if something is going wrong. That would be very hard for SPARK, 
and, with a few legitimate exceptions, such as pseudorandom numbers,
this is bad programming style anyway. Thus, SPARK prohibits it.

-- 
------ Stefan Lucks   --  Bauhaus-University Weimar  --   Germany  ------
               Stefan dot Lucks at uni minus weimar dot de
------  I  love  the  taste  of  Cryptanalysis  in  the  morning!  ------

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-13  9:28   ` SPARK stefan-lucks
@ 2010-05-13 16:48     ` Yannick Duchêne (Hibou57)
  2010-05-15 13:09       ` SPARK Peter C. Chapin
  0 siblings, 1 reply; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-13 16:48 UTC (permalink / raw)


On Thu, 13 May 2010 11:28:03 +0200, <stefan-lucks@see-the.signature>
wrote:
> When working with SPARK, it is difficult *not* to face that. E.g.,
> usually, you would write a pseudorandom number generator as
>
>   function Rnd return T is
>   begin
>     Global_State := f(Global_State);
>     return g(Global_State);
>   end;
>
> In SPARK, you have to replace the function Rnd by a procedure, which
> delivers the result as an out parameter.

Yes, an example from Barnes shows the same. Quoted from
http://people.cis.ksu.edu/~hatcliff/890-High-Assurance/Slides/Barnes-Ch-02-Language-Principles.pdf

    package Random_Numbers
    --# own Seed;
    --# initializes Seed;
    is

       procedure Random(X: out Float);
       --# global in out Seed;
       --# derives X, Seed from Seed;

    end Random_Numbers;

    package body Random_Numbers is

       Seed: Integer;
       Seed_Max: constant Integer := … ;

       procedure Random(X: out Float) is
       begin
          Seed := … ;
          X := Float(Seed) / Float(Seed_Max);
       end Random;

    begin -- initialization part
       Seed := 12345;
    end Random_Numbers;


> Very inconvenient! But reasonable:

For that purpose, yes, as a random generator is not a pure function.

> Note that the result from evaluating an expression such as "Rnd + Rnd**2"
> depends on the order of evaluation, which isn't specified in the Ada
> standard. If we write I for the initial global state, then "Rnd + Rnd**2"
> can either evaluate to
>
>   g(f(I)) + g(f(f(I)))**2,
>
> or to
>
>   g(f(f(I))) + g(f(I))**2.
A function optimized using memoisation (which saves recomputing
already-computed inputs) does not expose this behaviour.

> Such an unspecified behaviour makes static analysis very difficult. In
> principle, you would have to consider all possible orders of evaluation
> and check if something is going wrong. That would be very hard for SPARK,
> and, with a few legitimate exceptions, such as pseudorandom numbers,
> this is bad programming style anyway. Thus, SPARK prohibits it.
Indeed, there should not be any attempt at proof on a language which
allows functions whose result depends on the invocation timestamp, and
thus may depend on evaluation order.

Functions using memoisation are a different case and are still pure
functions. What changes between invocations is not the result, just the
time to compute it (which may be shorter for subsequent invocations).
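
(To illustrate, in plain Ada rather than SPARK, since the function
writes to a global; all names invented:)

    Cache  : array (Key_Range) of Integer;
    Cached : array (Key_Range) of Boolean := (others => False);

    function F (X : Key_Range) return Integer is
    begin
       if not Cached (X) then
          Cache (X)  := Slow_Computation (X); -- the hidden side effect
          Cached (X) := True;
       end if;
       return Cache (X); -- for a given X, always the same value
    end F;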

How to make this formal: maybe by proving that the function is the only
one to access the memoisation storage between invocations, so that this
data is mechanically private to the function. Then, demonstrate there
are two paths to the result: one which is the basic computation and one
which retrieves the result of a similar previous computation. This
latter step could rely on computing a key to access this data. Then
demonstrate that for a given input, the computed key is always the same
(an input <-> key association). Now, the hard part may be to demonstrate
that the key is used to access the result which was previously computed
for the same input as the one used to compute the key.

Obviously, the function should be the only one to access this data.

There is a concept in SPARK of variable packages and state machines.
Maybe there could be some room for a concept which would be something
like a function package?

Just the seed of an idea... I will think about it some later day.

-- 
pragma Asset ? Is that true ? Waaww... great



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-12 22:55 SPARK Yannick Duchêne (Hibou57)
                   ` (2 preceding siblings ...)
  2010-05-13  4:00 ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-13 16:54 ` Yannick Duchêne (Hibou57)
  2010-05-13 17:15   ` SPARK Rod Chapman
  2010-05-14  1:20 ` SPARK Yannick Duchêne (Hibou57)
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-13 16:54 UTC (permalink / raw)


An AdaCore/Praxis webinar talks about a tool which makes VCG files more
readable, producing a graph representation of them using GraphViz. This
looks useful.

To read or hear about it, go to
http://www.adacore.com/home/products/gnatpro/webinars/ and choose
“Getting started with SPARK (October 13, 2009)”; it is at about the last
third of the session.

However, I was not able to find this tool in the SPARK installation
directory and did not find more on the web.

Does someone know of any pointers, docs or links?

It is not required to run the Examiner or the Simplifier, I know, but
this really seems to make things more readable.

-- 
pragma Asset ? Is that true ? Waaww... great



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-13 16:54 ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-13 17:15   ` Rod Chapman
  2010-05-13 19:43     ` SPARK Yannick Duchêne (Hibou57)
  0 siblings, 1 reply; 61+ messages in thread
From: Rod Chapman @ 2010-05-13 17:15 UTC (permalink / raw)


On May 13, 5:54 pm, Yannick Duchêne (Hibou57)
<yannick_duch...@yahoo.fr> wrote:
> An AdaCore/Praxis webinar talks about a tool which makes VCG files more
> readable, producing a graph representation of them using GraphViz. This
> looks useful.

Run the Examiner with -vcg -debug=V
(see section 3.1.4 of the Examiner User Manual)...

Then go grab GraphViz from www.graphviz.org

Have fun...
 - Rod Chapman, SPARK Team, Praxis





^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-13 17:15   ` SPARK Rod Chapman
@ 2010-05-13 19:43     ` Yannick Duchêne (Hibou57)
  2010-05-13 20:05       ` SPARK Rod Chapman
  0 siblings, 1 reply; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-13 19:43 UTC (permalink / raw)


> Run the Examiner with -vcg -debug=V
> (see section 3.1.4 of the Examiner User Manual)...
OK. I had seen the debug option from the help command but thought it
was not the right one.
Actually, it is “debug=d” which produces dot files for GraphViz.
However, the v and V values also give readable output in the form of a
list (raw text).

Note for the Windows platform: the prefix for SPARK's command line
switches is "/" instead of "-" ("/" is indeed the native Windows style
for command line options).
> Have fun...
For sure I will :)


-- 
pragma Asset ? Is that true ? Waaww... great



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-13 19:43     ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-13 20:05       ` Rod Chapman
  2010-05-13 21:43         ` SPARK Yannick Duchêne (Hibou57)
  2010-05-14 14:47         ` SPARK Yannick Duchêne (Hibou57)
  0 siblings, 2 replies; 61+ messages in thread
From: Rod Chapman @ 2010-05-13 20:05 UTC (permalink / raw)


On May 13, 8:43 pm, Yannick Duchêne (Hibou57)
<yannick_duch...@yahoo.fr> wrote:
> > Run the Examiner with -vcg -debug=V
> > (see section 3.1.4 of the Examiner User Manual)...
>
> OK. I had seen the debug option from the help command but thought it
> was not the right one.
> Actually, it is “debug=d” which produces dot files for GraphViz.
> However, the v and V values also give readable output in the form of a
> list (raw text).

-debug=d dumps expression DAGs in DOT format.

-debug=v or V produces output on the screen AND also produces
DOT format for the VCG graph(s) alongside the generated .vcg
files.  If you use -debug=V and then look at the sequence of generated
graphs in numerical order, you'll see how the VC-generator works!

> Note for the Windows platform: the prefix for SPARK's command line
> switches is "/" instead of "-" ("/" is indeed the native Windows style
> for command line options).

We have now switched to use "-" on all platforms, so it's best
to use "-" on Windows now...
 - Rod



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-13 20:05       ` SPARK Rod Chapman
@ 2010-05-13 21:43         ` Yannick Duchêne (Hibou57)
  2010-05-14 14:47         ` SPARK Yannick Duchêne (Hibou57)
  1 sibling, 0 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-13 21:43 UTC (permalink / raw)


Le Thu, 13 May 2010 22:05:05 +0200, Rod Chapman  
<roderick.chapman@googlemail.com> a écrit:
> -debug=d dumps expression DAGs in DOT format.
>
> -debug=v or V produces output on the screen AND also produces
> DOT format for the VCG graph(s) alongside the generated .vcg
> files.  If you use -debug=V and then look at the sequence of generated
> graphs in numerical order, you'll see how the VC-generator works!
Yes, I was confused by previous runs which had left some .dot files
lying around.

The HTML output (-html) also seems recommended, as it produces more
easily browsable output than relying only on the Locations pane of GPS.

In case it is ever useful, here is a simple Windows script to generate
an image for each dot file. Windows users can then simply open any one
image and browse them all using the Windows image-slideshow
capabilities. Doing so, you can quickly switch from one image to
another, seeing them like a stroboscopic animation.

rem ----- Graphs_To_Images.bat ---------------------------

@echo off
cls

rem Set your input and output directories here. The Dot_Exe_Path variable
rem is the path to your dot binary. If it is in the PATH, simply leave it
rem as dot.exe. Input_Directory is the one you gave to the Examiner with
rem its output_directory option.
set Input_Directory=SPARK
set Output_Directory=Graphs_Images
set Dot_Exe_Path=dot.exe

rem Ensure the output directory exists
if not exist %Output_Directory% md %Output_Directory%

rem Clean up the content of the output directory so that no stale
rem images from previous runs remain.
del /Q %Output_Directory%\*.png
del /Q %Output_Directory%\*.gif
del /Q %Output_Directory%\*.svg

rem For each .dot file in the input directory, create a PNG image in the
rem output directory. The image names are taken from the .dot file names.
for %%f in (%Input_Directory%\*.dot) do %Dot_Exe_Path% -Tpng -o%Output_Directory%\%%~nf.png %%f

rem Clean up the .dot files as well, to avoid a cluttered directory (we
rem may want to quickly find the relevant .vcg and report files).
del /Q %Input_Directory%\*.dot

rem For safety, assign empty strings to the variables, as these are
rem always global in Windows cmd/bat files.
set Input_Directory=
set Output_Directory=
set Dot_Exe_Path=

rem ----- End of file ------------------------------------

-- 
pragma Asset ? Is that true ? Waaww... great



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-12 22:55 SPARK Yannick Duchêne (Hibou57)
                   ` (3 preceding siblings ...)
  2010-05-13 16:54 ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-14  1:20 ` Yannick Duchêne (Hibou57)
  2010-05-14  4:15   ` SPARK Yannick Duchêne (Hibou57)
  2010-05-14  3:07 ` SPARK Yannick Duchêne (Hibou57)
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-14  1:20 UTC (permalink / raw)


Hello, once again playing with the Checker and the Examiner.

Among all the documentation installed with SPARK, there are files named
SPARK_QRGn.pdf. SPARK_QRG1.pdf contains a summary of the annotations to
be used with the language.

I wonder if this document is part of the language definition or, if not,
whether I'm missing some documentation somewhere.

The reason for this doubt is, for example, the “--# assert ... ;”
annotation, which is documented in SPARK_QRG1 and in no other
documentation I've read. I have not read everything deeply yet, just had
an overview of some of it. Anyway, I searched “Examiner_UM.pdf” and
“SPARK95.pdf”, among others, for “# assert” in the text, and did not
find any occurrence. I can see it only in “SPARK_QRG1.pdf”.

Is this file part of the language definition, or did I miss some
relevant documentation elsewhere?

-- 
pragma Asset ? Is that true ? Waaww... great



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-12 22:55 SPARK Yannick Duchêne (Hibou57)
                   ` (4 preceding siblings ...)
  2010-05-14  1:20 ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-14  3:07 ` Yannick Duchêne (Hibou57)
  2010-05-14  3:26   ` SPARK Yannick Duchêne (Hibou57)
  2010-05-14  8:11   ` SPARK Phil Thornley
  2010-05-14 21:45 ` SPARK Yannick Duchêne (Hibou57)
                   ` (3 subsequent siblings)
  9 siblings, 2 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-14  3:07 UTC (permalink / raw)


The Quick Reference Card #1 talks about “type assertion”:

> A type assertion allows the user to specify the base type which the
> compiler will associate with a signed integer type. The base type
> must be supplied to the Examiner in the configuration file. This
> assertion allows the Examiner to generate VCs which are much more
> readily discharged [RTC 5.2].
>
> Example:
> type T is range 1 .. 10;
> --# assert T’Base is Short_Short_Integer;

As the Examiner ignores any representation clauses [SPARK 95 (13.1)],
it cannot take any implicit representation clause into account either.
So what is the purpose of this kind of assertion? Does the Examiner
check something beyond the range alone? Does it have something to do
with type conversions?

-- 
pragma Asset ? Is that true ? Waaww... great



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-14  3:07 ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-14  3:26   ` Yannick Duchêne (Hibou57)
  2010-05-14  8:11   ` SPARK Phil Thornley
  1 sibling, 0 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-14  3:26 UTC (permalink / raw)


Suspected error.

Unless I'm wrong for some reason, it seems the Quick Reference Card #1  
contains an error.

About the “return annotation”, it gives the following example:

    function Max(X, Y: Integer) return Integer;
    --# return M => (A >= B -> M = X) and
    --#             (B >= A -> M = Y);

What are A and B, if they aren't X and Y?
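
(Presumably the intended annotation, with X and Y, is:)

    function Max(X, Y: Integer) return Integer;
    --# return M => (X >= Y -> M = X) and
    --#             (Y >= X -> M = Y);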

-- 
pragma Asset ? Is that true ? Waaww... great



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-14  1:20 ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-14  4:15   ` Yannick Duchêne (Hibou57)
  2010-05-14  8:17     ` SPARK Phil Thornley
  0 siblings, 1 reply; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-14  4:15 UTC (permalink / raw)


> The reason for this doubt is, for example, the “--# assert ... ;”
> annotation, which is documented in SPARK_QRG1 and in no other
> documentation I've read. I have not read everything deeply yet, just
> had an overview of some of it. Anyway, I searched “Examiner_UM.pdf” and
> “SPARK95.pdf”, among others, for “# assert” in the text, and did not
> find any occurrence. I can see it only in “SPARK_QRG1.pdf”.
>
> Is this file part of the language definition, or did I miss some
> relevant documentation elsewhere?

Found: it is detailed in the file Examiner_GenVCs.pdf (Generation of VCs
for SPARK Programs).


-- 
pragma Asset ? Is that true ? Waaww... great



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-14  3:07 ` SPARK Yannick Duchêne (Hibou57)
  2010-05-14  3:26   ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-14  8:11   ` Phil Thornley
  2010-05-14 14:28     ` SPARK Yannick Duchêne (Hibou57)
  1 sibling, 1 reply; 61+ messages in thread
From: Phil Thornley @ 2010-05-14  8:11 UTC (permalink / raw)


On 14 May, 04:07, Yannick Duchêne (Hibou57) <yannick_duch...@yahoo.fr>
wrote:
> The Quick Reference Card #1 talks about “type assertion”:
>
> > A type assertion allows the user to specify the base type which the
> > compiler will associate with a signed integer type. The base type
> > must be supplied to the Examiner in the configuration file. This
> > assertion allows the Examiner to generate VCs which are much more
> > readily discharged [RTC 5.2].
>
> > Example:
> > type T is range 1 .. 10;
> > --# assert T’Base is Short_Short_Integer;
>
> As the Examiner ignores any representation clauses [SPARK 95 (13.1)],
> it cannot take any implicit representation clause into account either.
> So what is the purpose of this kind of assertion? Does the Examiner
> check something beyond the range alone? Does it have something to do
> with type conversions?
>
> --
> pragma Asset ? Is that true ? Waaww... great

Yannick,

These base type assertions are used to get the ranges for overflow
checks - see Section 5 of Generation of RTCs for SPARK Programs
(GenRTCs).

The Examiner can't validate this assertion, so it's up to the user to
get it right (but you might be able to use compiler assertions to
check it).  It is safe to assert a smaller base type than the compiler
actually chooses, so the portability problem can be reduced by
specifying the smallest possible base type.
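
(For instance, assuming an 8-bit Short_Short_Integer with base range
-128 .. 127; illustrative values only:)

   type T is range 1 .. 10;
   --# assert T'Base is Short_Short_Integer;

   --  For X, Y : T, an expression such as X * Y now yields overflow
   --  checks against -128 .. 127 rather than an unknown base range,
   --  and 1 <= X <= 10 and 1 <= Y <= 10 give X * Y <= 100 <= 127,
   --  which the Simplifier can discharge directly.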

Cheers,

Phil



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-14  4:15   ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-14  8:17     ` Phil Thornley
  2010-05-14  9:32       ` SPARK Rod Chapman
  2010-05-14 14:20       ` SPARK Yannick Duchêne (Hibou57)
  0 siblings, 2 replies; 61+ messages in thread
From: Phil Thornley @ 2010-05-14  8:17 UTC (permalink / raw)


On 14 May, 05:15, Yannick Duchêne (Hibou57) <yannick_duch...@yahoo.fr>
wrote:

> > Is this file part of the language definition or did I missed some  
> > relevant documentation elsewhere ?
>
> Found : it is detailed in the file Examiner_GenVCs.pdf (Generation of VCs  
> for SPARK Programs)

Yannick,

As you are discovering, the documentation for the optional proof
assertions in SPARK is not easy to find (it is spread across several
documents, and rather scattered within those documents).

You might find it helps to look at the tutorials on www.sparksure.com
(one set for the proof annotations and one set for the Proof Checker).

Cheers,

Phil



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-14  8:17     ` SPARK Phil Thornley
@ 2010-05-14  9:32       ` Rod Chapman
  2010-05-14 14:20       ` SPARK Yannick Duchêne (Hibou57)
  1 sibling, 0 replies; 61+ messages in thread
From: Rod Chapman @ 2010-05-14  9:32 UTC (permalink / raw)


> As you are discovering, the documentation for the optional proof
> assertions in SPARK is not easy to find (it is spread across several
> documents, and rather scattered within those documents).

Noted.  For SPARK Pro 9.0, we merged all the Proof material
into one manual and gave it the rather more obvious title "Proof
Manual".
We also produced a one-page clickable index to _all_ the manuals,
which has also proven useful.
 - Rod Chapman, SPARK Team



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-14  8:17     ` SPARK Phil Thornley
  2010-05-14  9:32       ` SPARK Rod Chapman
@ 2010-05-14 14:20       ` Yannick Duchêne (Hibou57)
  1 sibling, 0 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-14 14:20 UTC (permalink / raw)


On Fri, 14 May 2010 10:17:50 +0200, Phil Thornley
<phil.jpthornley@googlemail.com> wrote:
> As you are discovering, the documentation for the optional proof
> assertions in SPARK is not easy to find (it is spread across several
> documents, and rather scattered within those documents).
I wouldn't say that so much. There actually was, in the QR #1, a little
“[VCG 3]: ...” and, at the bottom of the card, something like “[VCG]
Generation of VCs for SPARK Programs”, so the cross references are
there; it was just that I missed one file (I have finally moved all the
release-notes documents elsewhere to make the docs directory more
handy).

> You might find it helps to look at the tutorials on www.sparksure.com
> (one set for the proof annotations and one set for the Proof Checker).
Seems like good material :) The link goes into my bookmarks (I will
tell about it on fr.comp.lang.ada too).

-- 
There is even better than a pragma Assert: an --# assert



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-14  8:11   ` SPARK Phil Thornley
@ 2010-05-14 14:28     ` Yannick Duchêne (Hibou57)
  0 siblings, 0 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-14 14:28 UTC (permalink / raw)


On Fri, 14 May 2010 10:11:29 +0200, Phil Thornley
<phil.jpthornley@googlemail.com> wrote:
> It is safe to assert a smaller base type than the compiler
> actually chooses, so the portability problem can be reduced by
> specifying the smallest possible base type.
Oh, yes, I see. Thanks Phil

-- 
There is even better than a pragma Assert: an --# assert



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-13 20:05       ` SPARK Rod Chapman
  2010-05-13 21:43         ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-14 14:47         ` Yannick Duchêne (Hibou57)
  1 sibling, 0 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-14 14:47 UTC (permalink / raw)


On Thu, 13 May 2010 22:05:05 +0200, Rod Chapman
<roderick.chapman@googlemail.com> wrote:
> We have now switched to use "-" on all platforms, so it's best
> to use "-" on Windows now...
>  - Rod
At least with the GPS integration on Windows, the project properties
tab for the Examiner does not work if "-" is used: the option is not
recognized and there is no feedback in the sheet view. Only "/" works
there (maybe this is different now with SPARK 9, I don't know).

An example to check: in the command line options input field (at the
bottom), writing “-index_file=my_index.idx” makes the corresponding
input field become empty (at the top of the property sheet), and it
comes back if “-index_file” is replaced with “/index_file”.

-- 
There is even better than a pragma Assert: an --# assert



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-12 22:55 SPARK Yannick Duchêne (Hibou57)
                   ` (5 preceding siblings ...)
  2010-05-14  3:07 ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-14 21:45 ` Yannick Duchêne (Hibou57)
  2010-05-15 16:41 ` SPARK Yannick Duchêne (Hibou57)
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-14 21:45 UTC (permalink / raw)


An anecdotal comment, surely not so important :p

In multiple places in various documentation (from all sources), a
sentence similar to “Proved by the Proof Checker” is used. Shouldn't
this be “Proved with the Proof Checker” instead? ... as the Proof
Checker requires human intervention and is not automated.

-- 
There is even better than a pragma Assert: an --# assert



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-13  3:06 ` SPARK Yannick Duchêne (Hibou57)
  2010-05-13  9:28   ` SPARK stefan-lucks
@ 2010-05-14 22:55   ` Yannick Duchêne (Hibou57)
  1 sibling, 0 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-14 22:55 UTC (permalink / raw)


On Thu, 13 May 2010 05:06:23 +0200, Yannick Duchêne (Hibou57)
<yannick_duchene@yahoo.fr> wrote:
> Block statements are not allowed in SPARK. Let's say a procedure can do
> the same. Was this only to have one feature per concern (avoid having
> both block statements and procedures, and keep only one) for the sake
> of simplicity, or were there other reasons related to verifiability?

Here is an answer:

A document from SparkSure (http://www.sparksure.com/7.html), namely
Part4_Multiple_Paths.pdf, warns about consecutive conditional
statements, giving the example of ten simple chained If statements which
turn into 1024 verification conditions. In this document, Phil logically
advises avoiding such cases by using small procedures instead. Doing so,
you end up with a single path instead of 1024.
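
(The blow-up is easy to see on a small scale; a sketch with invented
names:)

   if C1 then A := A + 1; end if;  -- 2 paths through this statement
   if C2 then B := B + 1; end if;  -- 2 * 2 = 4 paths so far
   --  ... ten such statements in sequence give 2**10 = 1024 paths,
   --  and a verification condition is generated for every path.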

So far this is not about blocks, but here comes the point: block
statements do not break such a chain of If statements, and they cannot
be given annotations. The lack of annotations on block statements makes
them ineffective for the goal of simplifying the program's provability.

So why not allow annotations on block statements? Well, simply because a
block statement does not come with a signature, and using a procedure
instead of a block statement requires you to specify more precisely what
the role of that block was. Adding extra annotations to the language,
to be used with blocks, would just load the language with unnecessary
complications (not welcome in this area).

Finally, block statements are disallowed to help avoid explosive
complexity.
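
(So where one might reach for a declare block, the SPARK style is a
small procedure with its own contract; an invented example:)

   procedure Swap (X, Y : in out Integer)
   --# derives X from Y & Y from X;
   is
      Temp : Integer;
   begin
      Temp := X;
      X := Y;
      Y := Temp;
   end Swap;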

-- 
There is even better than a pragma Assert: an --# assert



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-13 16:48     ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-15 13:09       ` Peter C. Chapin
  0 siblings, 0 replies; 61+ messages in thread
From: Peter C. Chapin @ 2010-05-15 13:09 UTC (permalink / raw)


Yannick Duchêne (Hibou57) wrote:

>>
>>   g(f(I)) + g(f(f(I)))**2,
>>
>> or to
>>
>>   g(f(f(I))) + g(f(I))**2.
> A function optimized using memoisation (which saves recomputing
> already-computed inputs) does not expose this behaviour.

If it works properly, yes, but what if it doesn't?

> How to make this formal: maybe by proving that the function is the
> only one to access the memoisation storage between invocations, so
> that this data is mechanically private to the function. Then,
> demonstrate there are two paths to the result: one which is the basic
> computation and one which retrieves the result of a similar previous
> computation. This latter step could rely on computing a key to access
> this data. Then demonstrate that for a given input, the computed key
> is always the same (an input <-> key association). Now, the hard part
> may be to demonstrate that the key is used to access the result which
> was previously computed for the same input as the one used to compute
> the key.
> 
> Obviously, the function should be the only one to access this data.
> 
> There is a concept in SPARK of variable packages and state machines.
> Maybe there could be some room for a concept which would be something
> like a function package?

While this might be possible, it does sound rather complicated. I can see why
SPARK doesn't currently go down this path.

Peter




^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-12 22:55 SPARK Yannick Duchêne (Hibou57)
                   ` (6 preceding siblings ...)
  2010-05-14 21:45 ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-15 16:41 ` Yannick Duchêne (Hibou57)
  2010-05-15 18:00   ` SPARK Yannick Duchêne (Hibou57)
                     ` (2 more replies)
  2010-05-15 22:48 ` SPARK Yannick Duchêne (Hibou57)
  2010-05-16  5:28 ` SPARK Yannick Duchêne (Hibou57)
  9 siblings, 3 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-15 16:41 UTC (permalink / raw)


Currently, I'm reading Phil's documents, the ones in the directory
Proof_Annotations, and especially looking at the exercises (yes, he
created some) in parts 5 to 9 and 12.

Part 5, exercise 4, turns out to be something interesting to me, as it
leads me to some questions about the way the Simplifier uses hypotheses.
I would like to be more explicit. Here is the relevant part (slightly
modified):


    ---------------------------------------------------------------------
    -- Exercise 5.4
    ---------------------------------------------------------------------
    procedure Limit
      (X     : in out Integer;
       Lower : in     Integer;
       Upper : in     Integer)
    --# pre    Lower < Upper;
    --# post  ((X~ < Lower) -> (X = Lower)) and
    --#       ((X~ > Upper) -> (X = Upper)) and
    --#       (X in Lower .. Upper);
    is
    begin
       if X < Lower then
          X := Lower;
       elsif X > Upper then
          X := Upper;
       end if;
    end Limit;

    function Percent_Of_Range
      (Data    : Integer;
       Lowest  : Integer;
       Highest : Integer) return Percent_Type
    --# pre    (Lowest < Highest) and                        -- (1)
    --#        ((Highest - Lowest) <= Integer'Last) and      -- (2)
    --#        (((Highest - Lowest) * 100) <= Integer'Last); -- (3)
    --#
    is
       Local_Data : Integer;
       Part : Integer;
       Whole : Integer;
    begin
       Local_Data := Data;                                   -- (4)
       Limit (Local_Data, Lowest, Highest);                  -- (5)
       Part := Local_Data - Lowest;                          -- (6)
       Whole := Highest - Lowest;                            -- (7)
       return Percent_Type((Part * 100) / Whole);            -- (8)
    end Percent_Of_Range;


The funny thing here is precondition line (2). If it is removed, the
precondition becomes “(Lowest < Highest) and (((Highest - Lowest) * 100)
<= Integer'Last)” and the Simplifier fails to prove line (6): it cannot
prove the result will be no greater than Integer'Last. I first thought
“(Highest - Lowest) * 100 <= Integer'Last” would be enough to also imply
“Highest - Lowest <= Integer'Last”; it seems it is not. If you add
precondition part (2), there is no more trouble with line (6).

Is this because of the way it uses hypotheses, or because it misses
some built-in rules needed to get the full benefit of precondition part
(3)?

Now, there is still another unproved condition... at line (8), the
Simplifier cannot prove “((Local_Data - Lowest) * 100) / (Highest -
Lowest) <= 100”. A first anecdote: funny to see that it has expanded the
variables Part and Whole.

I wanted to solve this in three steps and added the following between
(6) and (7):


       --# assert Part <= Whole; -- (6.1)
       --# assert (Part * 100) <= (Whole * 100); -- (6.2)
       --# assert ((Part * 100) / Whole) <= ((Whole * 100) / Whole); -- (6.3)


But the Simplifier was not able to prove (6.1) or (6.3), and was still
not able to prove (8). About (6.2), I wanted to have a test, and added
the following between (6.2) and (6.3):


       --# assert ((Part * 100) <= (Whole * 100)) ->
       --#        (((Part * 100) / Whole) <= ((Whole * 100) / Whole));


It was not able to prove it. Am I missing something, or does it simply
mean the Simplifier does not know about a rule which seems basic to me,
that is (A <= B) -> ((A / C) <= (B / C))? ... which is still valid for
integer arithmetic.

-- 
There is something even better than a pragma Assert: an --# assert



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-15 16:41 ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-15 18:00   ` Yannick Duchêne (Hibou57)
  2010-05-15 18:14   ` SPARK Yannick Duchêne (Hibou57)
  2010-05-15 18:43   ` SPARK Phil Clayton
  2 siblings, 0 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-15 18:00 UTC (permalink / raw)


> It was not able to prove it. Am I missing something, or does it simply mean
> the simplifier does not know a rule which seems basic to me, namely
> (A <= B) -> ((A / C) <= (B / C))? ... which still holds for integer
> arithmetic.
Sorry for the mistake: it should have been ((A <= B) and (A >= 0) and (C >
0)) -> ((A / C) <= (B / C)), to keep it simple and talk only about
what's relevant here.


-- 
There is something even better than a pragma Assert: an --# assert



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-15 16:41 ` SPARK Yannick Duchêne (Hibou57)
  2010-05-15 18:00   ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-15 18:14   ` Yannick Duchêne (Hibou57)
  2010-05-15 19:08     ` SPARK Yannick Duchêne (Hibou57)
  2010-05-15 18:43   ` SPARK Phil Clayton
  2 siblings, 1 reply; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-15 18:14 UTC (permalink / raw)


Another example, just as strange to me.


    procedure Limit
      (X     : in out Integer;
       Lower : in     Integer;
       Upper : in     Integer)
    --# pre    Lower < Upper;
    --# post  ((X~ < Lower) -> (X = Lower)) and
    --#       ((X~ > Upper) -> (X = Upper)) and
    --#       (X in Lower .. Upper);

    -- ....


       Limit (Local_Data, Lowest, Highest);
       Part := Local_Data - Lowest;
       Whole := Highest - Lowest;
       --# assert Part >= 0;
       --# assert Local_Data <= Highest;

    -- ....

The simplifier cannot prove “Local_Data <= Highest”, although the
postcondition of Limit states “X in Lower .. Upper”, so it should know
“Local_Data in Lowest .. Highest” and hence “Local_Data <= Highest”.

Then again, perhaps I'm not using it the right way (as a beginner, that is
quite possible).

-- 
There is something even better than a pragma Assert: an --# assert



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-15 16:41 ` SPARK Yannick Duchêne (Hibou57)
  2010-05-15 18:00   ` SPARK Yannick Duchêne (Hibou57)
  2010-05-15 18:14   ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-15 18:43   ` Phil Clayton
  2010-05-15 19:12     ` SPARK Yannick Duchêne (Hibou57)
  2 siblings, 1 reply; 61+ messages in thread
From: Phil Clayton @ 2010-05-15 18:43 UTC (permalink / raw)


On May 15, 5:41 pm, Yannick Duchêne (Hibou57)
<yannick_duch...@yahoo.fr> wrote:
>
>        --# assert ((Part * 100) <= (Whole * 100)) ->
>        --#        (((Part * 100) / Whole) <= ((Whole * 100) / Whole));
>
> It was not able to prove it. Am I missing something, or does it simply mean
> the simplifier does not know a rule which seems basic to me, namely
> (A <= B) -> ((A / C) <= (B / C))? ... which still holds for integer
> arithmetic.

This isn't valid for all values of A, B and C: consider C = -1.  It
would be sufficient to know 0 < C, so perhaps sufficient information
needs to be propagated to show 0 < Whole?

Phil C.



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-15 18:14   ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-15 19:08     ` Yannick Duchêne (Hibou57)
  2010-05-15 20:23       ` SPARK Yannick Duchêne (Hibou57)
  0 siblings, 1 reply; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-15 19:08 UTC (permalink / raw)


Oh, something interesting:


    procedure Limit
      (X     : in out Integer;
       Lower : in     Integer;
       Upper : in     Integer)
    --# pre    Lower < Upper;
    --# post  ((X~ < Lower) -> (X = Lower)) and
    --#       ((X~ > Upper) -> (X = Upper)) and
    --#       (X in Lower .. Upper);

    -- .....


       Limit (Local_Data, Lowest, Highest);
       --# assert Local_Data <= Highest;  -- (1)
       Part := Local_Data - Lowest;
       Whole := Highest - Lowest;
       --# assert Part >= 0;              -- (2)
       --# assert Local_Data <= Highest;  -- (3)

    -- .....


If (2) is removed, it can prove (1) and (3). If (2) is there, it fails to
prove (3), even though nothing changes Local_Data on the path from (1) to
(3), so (3) should be just as provable as (1).

When (2) is there, it seems to forget the previously proved
assertion (1), or the hypotheses which made (1) provable.

I will have to understand better how this “thing” thinks.

-- 
There is something even better than a pragma Assert: an --# assert



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-15 18:43   ` SPARK Phil Clayton
@ 2010-05-15 19:12     ` Yannick Duchêne (Hibou57)
  2010-05-15 21:02       ` SPARK Phil Clayton
  0 siblings, 1 reply; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-15 19:12 UTC (permalink / raw)


> This isn't valid for all values of A, B and C: consider C = -1.  It
> would be sufficient to know 0 < C, so perhaps sufficient information
> needs to be propagated to show 0 < Whole?
>
> Phil C.
Yes, you are right; that's why I had already apologized for this error
and corrected it in a reply to the erroneous message.

Your exercises are interesting BTW.


-- 
There is something even better than a pragma Assert: an --# assert



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-15 19:08     ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-15 20:23       ` Yannick Duchêne (Hibou57)
  2010-05-16 18:13         ` SPARK Peter C. Chapin
  2010-05-16 18:17         ` SPARK Phil Thornley
  0 siblings, 2 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-15 20:23 UTC (permalink / raw)


Le Sat, 15 May 2010 21:08:05 +0200, Yannick Duchêne (Hibou57)  
<yannick_duchene@yahoo.fr> a écrit:

> Oh, something interesting:
>
>
>     procedure Limit
>       (X     : in out Integer;
>        Lower : in     Integer;
>        Upper : in     Integer)
>     --# pre    Lower < Upper;
>     --# post  ((X~ < Lower) -> (X = Lower)) and
>     --#       ((X~ > Upper) -> (X = Upper)) and
>     --#       (X in Lower .. Upper);
>
>     -- .....
>
>
>        Limit (Local_Data, Lowest, Highest);
>        --# assert Local_Data <= Highest;  -- (1)
>        Part := Local_Data - Lowest;
>        Whole := Highest - Lowest;
>        --# assert Part >= 0;              -- (2)
>        --# assert Local_Data <= Highest;  -- (3)
>
>     -- .....
>
>
> If (2) is removed, it can prove (1) and (3). If (2) is there, it fails
> to prove (3), even though nothing changes Local_Data on the path from (1)
> to (3), so (3) should be just as provable as (1).

If “assert” is replaced by “check”, then it can prove (3) even if (2) is
there. So this:


      procedure Limit
        (X     : in out Integer;
         Lower : in     Integer;
         Upper : in     Integer)
      --# pre    Lower < Upper;
      --# post  ((X~ < Lower) -> (X = Lower)) and
      --#       ((X~ > Upper) -> (X = Upper)) and
      --#       (X in Lower .. Upper);

      -- .....


         Limit (Local_Data, Lowest, Highest);
         --# check Local_Data <= Highest;  -- (1)
         Part := Local_Data - Lowest;
         Whole := Highest - Lowest;
         --# check Part >= 0;              -- (2)
         --# check Local_Data <= Highest;  -- (3)

      -- .....

is OK.

So let's talk about “assert” vs “check”.

[Generation of VCs for SPARK Programs (2.2)] says:
> each assert or check statement in the code is located at a point in
> between two executable statements, in general, ie it is associated
> with a point on the arc of the flowchart of the subprogram which
> passes between the two statements it appears between. Each such
> assertion specifies a relation between program variables which must
> hold at that precise point, whenever execution reaches it. Assert
> statements generate a cut point on the flowchart of the subprogram,
> check statements do not.

What does “cut point” mean precisely? Is it related or similar to
Prolog's unfortunately all-too-famous cut? This seems to be the next
question to be answered, and a point of focus for people learning
SPARK.

Is it this “cut point” which makes the Simplifier forget previous
hypotheses when “assert” is used? (If it really is similar to Prolog's
cut, then indeed it is likely to make the prover lose its memory.)

About one of the most valuable things with proving: what is the better way
to give hints to the prover, “assert” or “check”? This example
suggests the best way is “check”. Is this example representative of most
cases, or just a special case?


--
There is something even better than a pragma Assert: an --# assert (or a
--# check... question pending)



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-15 19:12     ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-15 21:02       ` Phil Clayton
  0 siblings, 0 replies; 61+ messages in thread
From: Phil Clayton @ 2010-05-15 21:02 UTC (permalink / raw)


On May 15, 8:12 pm, Yannick Duchêne (Hibou57)
<yannick_duch...@yahoo.fr> wrote:
> Yes, you are right, that's why I had previously apologized for this error  
> and corrected it in a reply to the erroneous message.

Yes, I hadn't refreshed the web page, so I didn't see your latest message.


> Your exercises are interesting BTW.

They're not mine - I'm a different Phil!  An unfortunate overloading
issue here...

Phil C.



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-12 22:55 SPARK Yannick Duchêne (Hibou57)
                   ` (7 preceding siblings ...)
  2010-05-15 16:41 ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-15 22:48 ` Yannick Duchêne (Hibou57)
  2010-05-16  1:48   ` SPARK Yannick Duchêne (Hibou57)
  2010-05-16  5:28 ` SPARK Yannick Duchêne (Hibou57)
  9 siblings, 1 reply; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-15 22:48 UTC (permalink / raw)


OK, from one branch of this thread, you may have seen that a question was
raised about which of “assert” or “check” should be used to write
in-text proofs where the Simplifier could not prove things by itself.

The answer is important enough that I give it here at the root of this
thread, instead of in the latter leaf.

So here it is: use Check, not Assert.

Here is a concrete example of what you can do with Check and which does
not work with Assert. The example is detailed, I mean, it exposes more
detailed proofs than strictly necessary. This is because I like explicit
and expressive things, and also wanted to be sure nobody could miss a
single step of the proof or have any doubt about the usefulness of such
an approach to software design. That's also the reason why the exercise
this is based upon was modified a bit (like using two intermediate
variables instead of subexpressions... I often do that on my side).

Here it is:


    procedure Limit
      (X     : in out Integer;
       Lower : in     Integer;
       Upper : in     Integer)
    --# pre    Lower < Upper;
    --# post  ((X~ < Lower) -> (X = Lower)) and
    --#       ((X~ > Upper) -> (X = Upper)) and
    --#       (X in Lower .. Upper);
    is
    begin
       if X < Lower then
          X := Lower;
       elsif X > Upper then
          X := Upper;
       end if;
    end Limit;

    function Percent_Of_Range
      (Data    : Integer;
       Lowest  : Integer;
       Highest : Integer) return Percent_Type
    --# pre    (Lowest < Highest) and
    --#        ((Highest - Lowest) <= Integer'Last) and
    --#        (((Highest - Lowest) * 100) <= Integer'Last);
    is
       Local_Data : Integer;
       Part       : Integer;
       Whole      : Integer;
    begin
       Local_Data := Data;
       -- Local copy of Data to be allowed to modify it. We only need
       -- a modified version locally, the caller does not need this.

       Limit
         (X     => Local_Data, -- in out.
          Lower => Lowest,     -- in
          Upper => Highest);   -- in.

       Part  := Local_Data - Lowest;
       Whole := Highest - Lowest;

       -- We are to prove that ((Part * 100) / Whole) is in 0 .. 100.

       -- (1) Prove that Part >= 0.
       --# check Local_Data in Lowest .. Highest;
       --# check Local_Data >= Lowest;
       --# check (Local_Data - Lowest) >= 0;
       --# check Part >= 0;

       -- Given (1), it's obvious that (Part * 100) >= 0.

       -- (2) Prove that Whole is > 0.
       --# check Lowest < Highest;
       --# check Highest > Lowest;
       --# check (Highest - Lowest) > 0;
       --# check Whole > 0;

       -- Given (2), it's obvious that ((Part * 100) / Whole) is valid.
       -- Given (1) and (2), it's obvious that ((Part * 100) / Whole) >= 0.

       -- (3) Prove that (Whole >= Part). This is required for (4) which is
       --    to come.
       --# check Local_Data in Lowest .. Highest;
       --# check Highest >= Local_Data;
       --# check (Highest - Lowest) >= (Local_Data - Lowest);
       --# check Whole >= Part;

       -- (4) Prove that ((Part * 100) / Whole) is less or equal to 100.
       --# check Part <= Whole;
       --# check (Part * 100) <= (Whole * 100);
       --# check ((Part * 100) / Whole) <= ((Whole * 100) / Whole);
       --# check ((Part * 100) / Whole) <= 100;

       -- (5) Prove that the subexpression (Part * 100) will not
       --     commit an overflow.
       --# check (((Highest - Lowest) * 100) <= Integer'Last);
       --# check (Whole * 100) <= Integer'Last;
       --# check Part <= Whole;
       --# check (Part * 100) <= Integer'Last;

       -- Given (1), we know ((Part * 100) / Whole) >= 0.
       -- Given (4), we know ((Part * 100) / Whole) <= 100.
       -- Given (5), we know (Part * 100) will not commit an overflow.
       -- Given (1), (2) and (5), the following statement is then proved
       -- to be valid and meaningful.

       return Percent_Type ((Part * 100) / Whole);

    end Percent_Of_Range;

A working example of a step-by-step proof of everything required to be
proven.

Wow... I like it so much, I love it. This is the way I have long thought
software should be built! I have been waiting for such a thing for about
14 years, you know!

I learned about VDM when I was younger, borrowed books about it from
libraries, and thought about creating my own implementation of VDM, but
failed to find the language specification (and was frightened by the
idea of setting up my own language in this area).

And now, I see SPARK can do nearly the same.

Well, although wonderful, it's still not perfect: why is there a
Checker (which I didn't use here), and why no provision to write in the
text what is done with the Checker? That's what I was attempting to do
here. It works, but I miss something like an explicit “from (1) and (2)
infer ....” etc. I cannot give names or numbers to Check clauses and
cannot explicitly refer to them. Nor can I explicitly refer to a
precondition in inferences (as in (5), where the reference to a
precondition is implicit, not explicit).

Is there some provision for such a thing?

Yes, I may be wishing for too much... it's not perfect, though still
wonderful. Do you feel it too? :)


-- 
There is something even better than a pragma Assert: an --# assert (or a
--# check... question pending)



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-15 22:48 ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-16  1:48   ` Yannick Duchêne (Hibou57)
  2010-05-16  1:53     ` SPARK Yannick Duchêne (Hibou57)
  0 siblings, 1 reply; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-16  1:48 UTC (permalink / raw)


On Sun, 16 May 2010 00:48:33 +0200, Yannick Duchêne (Hibou57)
<yannick_duchene@yahoo.fr> wrote:

> OK, from one branch of this thread, you may have seen that a question
> was raised about which of “assert” or “check” should be used to write
> in-text proofs where the Simplifier could not prove things by itself.
>
> The answer is important enough that I give it here at the root of this
> thread, instead of in the latter leaf.
>
> So here it is: use Check, not Assert.
> [...]

Part 6 of Phil's document says:

> For both check and assert, there is a VC generated that has
> the current program state as hypotheses and the <Boolean
> expression> as the conclusion.
I've met something different (see previous messages in this thread); or
at least, the current state is not represented with the same set of
hypotheses.


> For code that does not contain any loops, there is
> (in principle) never any need for either of these
> annotations since they cannot make unprovable VCs
> into provable VCs.
Sorry, I can't buy that at all. Using Check clauses, I could make VCs
provable that were otherwise not provable by the simplifier.

Sorry friend, I'm not dreaming: this really happened.

-- 
There is something even better than a pragma Assert: a SPARK --# check.

Wanted: if you know of some thoughts in the area of comparisons between
SPARK and VDM, please let me know. I will enjoy talking about it with you.



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-16  1:48   ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-16  1:53     ` Yannick Duchêne (Hibou57)
  0 siblings, 0 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-16  1:53 UTC (permalink / raw)


> Sorry, I can't buy that at all. Using Check clauses, I could make VCs
> provable that were otherwise not provable by the simplifier.
>
> Sorry friend, I'm not dreaming: this really happened.
No, OK, I'm wrong: I misunderstood the meaning of “unprovable” in this
context.

You're right.

-- 
There is something even better than a pragma Assert: a SPARK --# check.

Wanted: if you know of some thoughts in the area of comparisons between
SPARK and VDM, please let me know. I will enjoy talking about it with you.



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-12 22:55 SPARK Yannick Duchêne (Hibou57)
                   ` (8 preceding siblings ...)
  2010-05-15 22:48 ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-16  5:28 ` Yannick Duchêne (Hibou57)
  2010-05-18 18:01   ` SPARK Yannick Duchêne (Hibou57)
  9 siblings, 1 reply; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-16  5:28 UTC (permalink / raw)


Request for confirmation.

In Praxis's documentation, there is a file named Checker_Rules.pdf, whose
title is “SPADE Proof Checker Rules Manual” and which lists the Checker's
built-in rules. I do not see any reason why the Examiner/Simplifier would
have a different rule set than the Checker, so I suppose this rule set is
also the one used by the Examiner/Simplifier. I would just like to be
sure: can someone confirm?

If it is, this may explain why I ran into some trouble proving one thing
with real arithmetic (yes, I know real arithmetic is not safe and I must
be careful, but this is just an exercise...).

-- 
There is something even better than a pragma Assert: a SPARK --# check.

Wanted: if you know of some thoughts in the area of comparisons between
SPARK and VDM, please let me know. I will enjoy talking about it with you.



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-15 20:23       ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-16 18:13         ` Peter C. Chapin
  2010-05-17  0:59           ` SPARK Yannick Duchêne (Hibou57)
  2010-05-16 18:17         ` SPARK Phil Thornley
  1 sibling, 1 reply; 61+ messages in thread
From: Peter C. Chapin @ 2010-05-16 18:13 UTC (permalink / raw)


Yannick Duchêne (Hibou57) wrote:

> Is it this “cut point” which makes the Simplifier forget previous
> hypotheses when “assert” is used? (If it really is similar to Prolog's
> cut, then indeed it is likely to make the prover lose its memory.)
> 
> About one of the most valuable things with proving: what is the better way
> to give hints to the prover, “assert” or “check”? This example
> suggests the best way is “check”. Is this example representative of most
> cases, or just a special case?

My understanding is that assert prevents the simplifier from using previous
material as hypotheses in following verification conditions---as you noticed
in your experiments. John Barnes talks about this in his book a little.
Mostly I think assert is intended for use in loops. Without it, SPARK needs
to consider each loop iteration as a separate path, which is clearly a problem
(since loops iterate a dynamic number of times). The assert "cuts" the loop
and reduces the problem to just

1. The path into the loop for the first time to the assert.
2. The path around the loop from assert to assert.
3. The path from the assert to whatever follows the loop.

It is necessary, I think, for SPARK to forget about previous "stuff" when
handling the assert since otherwise it would have to consider all the
previous loop iterations.

The check annotation asks SPARK to prove the check and then it can *add* that
information to the hypotheses of the following verification conditions rather
than start from scratch.

So to summarize perhaps a good rule might be to use assert to express loop
invariants and check for everything else. I welcome other comments on this.
I'm learning as well.
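
As a minimal sketch of that rule of thumb (my own illustration, not from
the book: Sum_Up_To, its bound, and the invariant are invented, and I'm
assuming a 32-bit Integer so nothing overflows):

    procedure Sum_Up_To
      (N     : in     Natural;
       Total :    out Natural)
    --# derives Total from N;
    --# pre  N <= 1000;
    --# post Total = (N * (N + 1)) / 2;
    is
    begin
       Total := 0;
       for I in Natural range 1 .. N loop
          -- The assert is the loop invariant: it cuts the loop, so it
          -- must restate everything the paths after it will need.
          --# assert (Total = ((I - 1) * I) / 2) and (I <= N) and (N <= 1000);
          Total := Total + I;
       end loop;
       -- The check is just a hint: it is proved and then added to the
       -- hypotheses, without discarding the rest of the local state.
       --# check Total <= (1000 * 1001) / 2;
    end Sum_Up_To;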

Peter




^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-15 20:23       ` SPARK Yannick Duchêne (Hibou57)
  2010-05-16 18:13         ` SPARK Peter C. Chapin
@ 2010-05-16 18:17         ` Phil Thornley
  2010-05-17  1:24           ` SPARK Yannick Duchêne (Hibou57)
  1 sibling, 1 reply; 61+ messages in thread
From: Phil Thornley @ 2010-05-16 18:17 UTC (permalink / raw)


On 15 May, 21:23, Yannick Duchêne (Hibou57) <yannick_duch...@yahoo.fr>
wrote:
[big snip :-]

> What does “cut point” means precisely ? Is it related or similar to the  
> unfortunately too much famous Prolog's cut ? This seems to be the next  
> question to be answered, and a focus to care about for peoples learning  
> SPARK.

There may be multiple paths to any point (before/after/between
executable statements).  For the purposes of VC generation, a "cut
point" is a point in the code that terminates all the paths that reach
it, with a single path starting at that point.

The hypotheses of VCs include the local state that has been defined
along a path, and each path to a point will have different versions of
these hypotheses.

Therefore the hypotheses that are generated in VCs that follow a cut
point cannot include any of the hypotheses that have been included in
the VCs before the cut point - unless they are included in the
assertion at the cut point and therefore proved to be true for every
path that reaches that cut point.
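
A tiny sketch of why (my own illustration; the subprogram and its
contract are invented):

   procedure Abs_Value (X : in Integer; Y : out Integer)
   --# derives Y from X;
   --# pre  X > Integer'First;
   --# post Y >= 0;
   is
   begin
      if X < 0 then
         Y := -X;
      else
         Y := X;
      end if;
      -- Two paths reach this point, each with its own hypotheses
      -- about Y.  An assert here cuts both paths:
      --# assert Y >= 0;
      -- VCs generated after the cut see only "Y >= 0"; facts such as
      -- "X < 0 -> Y = -X" are lost unless restated in the assertion.
   end Abs_Value;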


> About one of the most valuable things with proving: what is the better way
> to give hints to the prover, “assert” or “check”? This example
> suggests the best way is “check”. Is this example representative of most
> cases, or just a special case?

You should always use 'check' when giving the Simplifier a hint - then
you don't lose any of the local state information.

Another key point to understand is that the Simplifier does not prove
all provable VCs (so when the tutorials ask you to "make all VCs
provable" don't expect to get all the VCs proved by the Simplifier.)

There are (at least) three ways to deal with these (other than
changing the code itself).
1) Give the Simplifier 'hints' by adding check annotations.
2) Use the Proof Checker to prove the VCs left after simplification.
3) Give the Simplifier more rules to use ('user rules').

My experience is that trying 1) can get very frustrating as the
Simplifier sometimes seems to be extremely stupid.
2) works OK but isn't practical for any software under development as
the proof scripts that are created (and are needed each time the VCs
are generated) are extremely fragile to minor changes in the software.
The current recommendation is to use 3) - create user rules as
appropriate.  These are easy to write and usually work as expected.
(But they are a very powerful mechanism, so are correspondingly
dangerous - it is quite easy to create unsound user rules.)
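
As an illustration (the rule-family name is mine, not from the toolset),
a user rule capturing the integer-division inequality discussed earlier
in this thread could be put in a .rlu file as:

   divide_ineq(1): X/Z <= Y/Z may_be_deduced_from [ X <= Y, X >= 0, Z > 0 ].

The side conditions X >= 0 and Z > 0 are what make it sound for integer
division, per the C = -1 counterexample earlier in the thread.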

(If you are working through the tutorials in sequence then user rules
are in section 11 - the next to the last).

Cheers,

Phil



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-16 18:13         ` SPARK Peter C. Chapin
@ 2010-05-17  0:59           ` Yannick Duchêne (Hibou57)
  0 siblings, 0 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-17  0:59 UTC (permalink / raw)


On Sun, 16 May 2010 20:13:27 +0200, Peter C. Chapin <pcc482719@gmail.com>
wrote:
> My understanding is that assert prevents the simplifier from using  
> previous
> material as hypotheses in following verification conditions---as you  
> noticed
> in your experiments. John Barnes talks about this in his book a little.
> Mostly I think assert is intended for use in loops. Without it, SPARK  
> needs
> to consider each loop iteration as a separate path
That's the word: as you said, this cuts the path. That's what I learned
later too. It starts a new path, with a new set of hypotheses, and indeed
forgets all the previous ones.

Now we know why it is so.

> [...]
> So to summarize perhaps a good rule might be to use assert to express  
> loop
> invariants and check for everything else. I welcome other comments on  
> this.
It seems all is there.

> I'm learning as well.
Great! Cheers


-- 
There is something even better than a pragma Assert: a SPARK --# check.

Wanted: if you know of some thoughts in the area of comparisons between
SPARK and VDM, please let me know. I will enjoy talking about it with you.



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-16 18:17         ` SPARK Phil Thornley
@ 2010-05-17  1:24           ` Yannick Duchêne (Hibou57)
  0 siblings, 0 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-17  1:24 UTC (permalink / raw)


On Sun, 16 May 2010 20:17:49 +0200, Phil Thornley
<phil.jpthornley@googlemail.com> wrote:
> [...]
> Therefore the hypotheses that are generated in VCs that follow a cut
> point cannot include any of the hypotheses that have been included in
> the VCs before the cut point - [...]
Otherwise it would not be a cut point any more, indeed; and starting
with a new state is not a side effect, it's the purpose.

> Another key point to understand is that the Simplifier does not prove
> all provable VCs (so when the tutorials ask you to "make all VCs
> provable" don't expect to get all the VCs proved by the Simplifier.)
Unfortunately, I wanted to prove everything (cheese)

To be honest, I did not work on all of them, just some, and only had a
quick overview of the others.

> There are (at least) three ways to deal with these (other than
> changing the code itself).
Sorry about that, er, Professor: I've modified it a bit (re-cheese).

> 1) Give the Simplifier 'hints' by adding check annotations.
> 2) Use the Proof Checker to prove the VCs left after simplification.
As said in another message, this was exactly the purpose: I wanted to
prove without relying on the Checker, in order to have all proofs in the
text. It seems to be possible, as long as one is explicit enough.

> 3) Give the Simplifier more rules to use ('user rules').
I would rather add a very few rules to the whole system than add some
rules here and there, all package-specific.

> My experience is that trying 1) can get very frustrating as the
> Simplifier sometimes seems to be extremely stupid.
That's an important matter. In another thread, on the French c.l.a (where
I talked about the same thing), I was afraid some people might think SPARK
is not working properly and is unusable for that reason. That's why I
insisted a lot on the Check clause and on not being afraid to detail
proofs as much as needed.

> 2) works OK but isn't practical for any software under development as
> the proof scripts that are created (and are needed each time the VCs
> are generated) are extremely fragile to minor changes in the software.
> The current recommendation is to use 3) - create user rules as
> appropriate.  These are easy to write and usually work as expected.
> (But they are a very powerful mechanism, so are correspondingly
> dangerous - it is quite easy to create unsound user rules.)
I would prefer choice #3, which seems more logical to me. It just may be
better to add these rules system-wide rather than on a package basis. If
I'm not wrong, there is provision for that as well.

The requirement for a rule to be sound is indeed important. There may be
some rules of thumb in that area, like checking all special cases and all
special ranges. As an example, before adding a rule on reals, one should
carefully check that the rule is valid for multiple combinations of
special values, bounds and ranges for each item appearing in the rule,
like 0, 1, -1, 0 .. 1, 0 .. -1, > 1, < -1, and so on. The same with sets;
there are special values as well: the empty set/sequence, the one-element
set/sequence, more than one element, a set included in another, etc.
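
To illustrate with the division example from earlier in this thread (the
rule name is invented), dropping those side conditions gives an unsound
rule:

    bad_div(1): X/Z <= Y/Z may_be_deduced_from [ X <= Y ].

Take X = 1, Y = 2, Z = -1: the rule would let the tools deduce -1 <= -2,
which is false.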

In that area, I was wondering how the built-in rules were chosen: from a
well-known body of work? Empirically? By accretion during SPARK's
lifetime? Some other way?

I was wondering about it because I could not find one rule which seems
important to me; it does not seem to be part of the predefined set
(well, there are actually 500+ rules; however, some useful rules may
still be missing).

> (If you are working through the tutorials in sequence then user rules
> are in section 11 - the next to the last).
Yes, I've seen it, and I've also read Part 6, which is about the
difference between Check and Assert (I read it later, after the question
you were answering).

Have a nice day (Peter as well, obviously)

Yannick D.

-- 
There is something even better than a pragma Assert: a SPARK --# check.

Wanted: if you know of some thoughts in the area of comparisons between
SPARK and VDM, please let me know. I will enjoy talking about it with you.



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-16  5:28 ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-18 18:01   ` Yannick Duchêne (Hibou57)
  2010-05-19  8:09     ` SPARK Phil Thornley
  0 siblings, 1 reply; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-18 18:01 UTC (permalink / raw)


On Sun, 16 May 2010 07:28:16 +0200, Yannick Duchêne (Hibou57)
<yannick_duchene@yahoo.fr> wrote:

> Request for confirmation.
>
> In Praxis's documentations, there is a file named Checker_Rules.pdf,  
> title is “SPADE Proof Checker Rules Manual”, which list Checker's  
> built-in rules. I do not see any reason why the Examiner/Simplifier  
> would have a different rules set than the Checker, so I suppose this  
> rules set is also the one used by the Examiner/Simplifier. I would just  
> want to be sure : someone can confirm ?
>
> If it is, this may explain why I meet some troubles proving one thing  
> with real arithmetic (yes, I know real arithmetic is not safe and I must  
> care, but this is just an exercise...)

About adding rules.

The Praxis document named Checker_Rules.pdf says

> The tool Buildchlib is no longer shipped with the Checker
> following the release of version 2.05. The use of
> user-defined rule files is still permitted, through the
> consult command. For further information contact
> Praxis High Integrity Systems.
So there is no way to add a rule system-wide as I was expecting.

Attempting to change the content of an *.RUL file in the lib/checker/rules
directory has no effect at all. The rules really seem to be compiled into
the Simplifier and the Checker.

Here are the rules I tried to add to NUMINEQS.RUL:

    inequals(122):  X/Y<1     may_be_deduced_from [ X>=0, Y>0, Y>X ].
    inequals(123):  X/Y<1     may_be_deduced_from [ X<=0, Y<0, Y<X ].
    inequals(124):  X/Y>(-1)  may_be_deduced_from [ X>=0, Y<0, (-Y)>X ].
    inequals(125):  X/Y>(-1)  may_be_deduced_from [ X<=0, Y>0, Y>(-X) ].

They work only if added in an .RLU file.

Note: writing something like ...

    inequals(124):  X/Y>(-1)  may_be_deduced_from [ X>=0, Y<0, abs(Y)>abs(X) ].

... although parsed correctly, turns into a rule which may not be
applied by the Simplifier. It seems rules must be given in the most
straightforward way. This may explain why ARITH.RUL contains things like:

    arith(1):    X*1             may_be_replaced_by X.
    arith(2):    1*X             may_be_replaced_by X.

Commutativity does not seem to be applied or attempted automatically.

Concerning the rules I wanted to add, I tried something like:

    inequals(122):  [ X/Y>=0, X/Y<1 ]    may_be_deduced_from [ X>=0, Y>0, Y>X ].

This at least parses without a syntax error message, but is not applied.

I've noticed the Simplifier seems to read an .RUL file if it belongs to a
directory where it is reading .VCG files (I don't know why it does that).
I noticed that while I was playing a bit with it :p

I will have to search for another way to add rules system-wide, if possible.


Have a nice day



-- 
There is something even better than a pragma Assert: a SPARK --# check.

Wanted: if you know of some thoughts in the area of comparisons between
SPARK and VDM, please let me know. I will enjoy talking about it with you.



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-18 18:01   ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-19  8:09     ` Phil Thornley
  2010-05-19 20:38       ` SPARK Simon Wright
  2010-05-19 21:35       ` SPARK Yannick Duchêne (Hibou57)
  0 siblings, 2 replies; 61+ messages in thread
From: Phil Thornley @ 2010-05-19  8:09 UTC (permalink / raw)


On 18 May, 19:01, Yannick Duchêne (Hibou57) <yannick_duch...@yahoo.fr>
wrote:
> I've noticed Simplifier seems to read an .RUL file it is belongs to a  
> directory where it is reading .VCG files (I don't why it do that). I've  
> noticed that while I was playing a bit with it :p
>
> Will have to search for another way to add rules system-wide, if possible.

Yannick,

I don't think that the toolset provides what you are looking for*.

The only recognised way to supply additional rules to the Simplifier
is as user rules - the full details about this are in Section 7 of
the Simplifier user manual.  This section also describes how the user
rules are applied.

It is unfortunate that user rules are less effective than the rules
built into the Simplifier as they are only used once the normal
Simplifier strategies have failed to prove a conclusion (see the
introduction of Section 3).

For example an inference rule will only be used if it *directly*
proves a conclusion.
Section 7.2.1: "In order for the Simplifier to be able to make use of
the rule, it must find an instance of the rule by pattern-matching it
against the current VC (for inference rules, this essentially means
against the undischarged conclusions of the VC)"
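
For instance (an invented rule, stated in terms of the VC from earlier
in this thread), a user rule such as

   part_bound(1): (P * 100) / W <= 100 may_be_deduced_from
                     [ P >= 0, P <= W, W > 0 ].

will be applied only when some undischarged conclusion is syntactically
an instance of (P * 100) / W <= 100; the Simplifier will not use it to
rewrite hypotheses or subterms.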

Cheers,

Phil

* Except that, of course, the full source of the Simplifier is
available, and there is nothing to stop you making changes to it!



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-19  8:09     ` SPARK Phil Thornley
@ 2010-05-19 20:38       ` Simon Wright
  2010-05-19 21:27         ` SPARK Yannick Duchêne (Hibou57)
  2010-05-19 21:35       ` SPARK Yannick Duchêne (Hibou57)
  1 sibling, 1 reply; 61+ messages in thread
From: Simon Wright @ 2010-05-19 20:38 UTC (permalink / raw)


Phil Thornley <phil.jpthornley@googlemail.com> writes:

> * Except that, of course, the full source of the Simplifier is
> available, and there is nothing to stop you making changes to it!

I believe you need the non-free (i.e., quite expensive) SICSTUS Prolog
system to build the tools from source (there is SICSTUS-specific code
which doesn't work under GNU Prolog).

The GPL SPARK 8.1.1 binary release doesn't run well under Mac OS X Snow
Leopard (problems with exception handling in 64-bit executables).



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-19 20:38       ` SPARK Simon Wright
@ 2010-05-19 21:27         ` Yannick Duchêne (Hibou57)
  2010-05-20  6:21           ` SPARK Simon Wright
  0 siblings, 1 reply; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-19 21:27 UTC (permalink / raw)


On Wed, 19 May 2010 22:38:07 +0200, Simon Wright <simon@pushface.org>
wrote:

> Phil Thornley <phil.jpthornley@googlemail.com> writes:
>
>> * Except that, of course, the full source of the Simplifier is
>> available, and there is nothing to stop you making changes to it!
>
> I believe you need the non-free (ie, quite expensive)
A side question: why is it always either free or expensive? Why
not multiple levels between these two extremes?

> SICSTUS Prolog
> system to build the tools from source (there is SICSTUS-special code
> which doesn't work under GNU Prolog).
You may be right, as I remember an old topic (about a year and a half or
two ago) about someone getting into trouble trying to build SPARK with
his favorite Prolog compiler.

> The GPL SPARK 8.1.1 binary release doesn't run well under Mac OS X Snow
> Leopard (problems with exception handling in 64-bit executables).
Where does the SPARK executable make use of exceptions? I have read
something asserting that the SPARK system is itself checked to be free of
runtime exceptions. Perhaps this is in an external dependency.



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-19  8:09     ` SPARK Phil Thornley
  2010-05-19 20:38       ` SPARK Simon Wright
@ 2010-05-19 21:35       ` Yannick Duchêne (Hibou57)
  1 sibling, 0 replies; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-19 21:35 UTC (permalink / raw)


On Wed, 19 May 2010 10:09:33 +0200, Phil Thornley
<phil.jpthornley@googlemail.com> wrote:
> For example an inference rule will only be used if it *directly*
> proves a conclusion.
> Section 7.2.1: "In order for the Simplifier to be able to make use of
> the rule, it must find an instance of the rule by pattern-matching it
> against the current VC (for inference rules, this essentially means
> against the undischarged conclusions of the VC)"
Thanks for the detail, Phil.

> * Except that, of course, the full source of the Simplifier is
> available, and there is nothing to stop you making changes to it!
Simon has told of one limit in this area. I see another, which is the
biggest to me: this would be a kind of fork, and a fork is no longer
something common to everyone. It would be a valid way for my own use
only, and could not be shared any more; someone else could not check
something from me using his/her own SPARK installation. I used to be
interested in the specifications of some other languages of this kind,
which I never found, and gave up on the idea (probably far too big a
piece of work) of attempting to create one, for this exact reason: it
would be specific to me, to what I use. In this area, it is mandatory to
have standards; otherwise, it is not as interesting.

This is not only about checking for validity; it is also about
expressing to others why it is so, in a way they can trust and read.



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-19 21:27         ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-20  6:21           ` Simon Wright
  2010-05-20  6:58             ` SPARK Yannick Duchêne (Hibou57)
  0 siblings, 1 reply; 61+ messages in thread
From: Simon Wright @ 2010-05-20  6:21 UTC (permalink / raw)


"Yannick Duchêne (Hibou57)" <yannick_duchene@yahoo.fr> writes:

>> The GPL SPARK 8.1.1 binary release doesn't run well under Mac OS X Snow
>> Leopard (problems with exception handling in 64-bit executables).
> Where does the SPARK executable make use of exceptions? I have read
> something asserting that the SPARK system is itself checked to be free of
> runtime exceptions. Perhaps this is in an external dependency.

Yes; the traceback is

#0  0x00007fff82610886 in __kill ()
#1  0x00007fff826b0eae in abort ()
#2  0x00000001000837ce in uw_init_context_1 ()
#3  0x00000001000838e8 in _Unwind_Backtrace ()
#4  0x00000001000c8279 in __gnat_backtrace ()
#5  0x00000001000c42dd in system__traceback__call_chain ()
#6  0x0000000100088259 in ada__exceptions__call_chain ()
#7  0x0000000100087a61 in ada__exceptions__exception_propagation__propagate_exceptionXn ()
#8  0x0000000100088548 in __gnat_raise_nodefer_with_msg ()
#9  0x00000001000885aa in __gnat_raise_exception ()
#10 0x00000001000b5b87 in system__file_io__open ()
#11 0x000000010009744e in ada__text_io__open ()
#12 0x0000000100023f2f in spark_io__open ()
#13 0x000000010002b7be in sparkmake__openorcreatefile.1909 ()
#14 0x000000010002c083 in _ada_sparkmake ()

so I probably had some missing file. If the SPARK system itself is
exception-free, and the RTS makes no use of exceptions internally, maybe
this is worth pursuing... it adds to the difficulties, though.



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-20  6:21           ` SPARK Simon Wright
@ 2010-05-20  6:58             ` Yannick Duchêne (Hibou57)
  2010-05-20 21:51               ` SPARK Simon Wright
  0 siblings, 1 reply; 61+ messages in thread
From: Yannick Duchêne (Hibou57) @ 2010-05-20  6:58 UTC (permalink / raw)


On Thu, 20 May 2010 08:21:43 +0200, Simon Wright <simon@pushface.org>
wrote:
> Yes; the traceback is
>
> #0  0x00007fff82610886 in __kill ()
> #1  0x00007fff826b0eae in abort ()
> #2  0x00000001000837ce in uw_init_context_1 ()
> #3  0x00000001000838e8 in _Unwind_Backtrace ()
> #4  0x00000001000c8279 in __gnat_backtrace ()
> #5  0x00000001000c42dd in system__traceback__call_chain ()
> #6  0x0000000100088259 in ada__exceptions__call_chain ()
> #7  0x0000000100087a61 in  
> ada__exceptions__exception_propagation__propagate_exceptionXn ()
> #8  0x0000000100088548 in __gnat_raise_nodefer_with_msg ()
> #9  0x00000001000885aa in __gnat_raise_exception ()
> #10 0x00000001000b5b87 in system__file_io__open ()
> #11 0x000000010009744e in ada__text_io__open ()
> #12 0x0000000100023f2f in spark_io__open ()
> #13 0x000000010002b7be in sparkmake__openorcreatefile.1909 ()
> #14 0x000000010002c083 in _ada_sparkmake ()
>
> so I probaby had some missing file. If the SPARK system itself s
> excetion-free, and the RTS makes no use of exceptions internally, maybe
> this is worth pursuing .. adds to the difficulties, though.
SPARK_IO is indeed implemented on top of Ada.Text_IO, which is not under
the control of the SPARK application.

I still wonder why Text_IO fails here, and surprisingly right at startup.
It requires investigation.

Did you submit this issue to Praxis? The Mac is fairly popular; I'm
pretty sure they will be interested in the case.



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: SPARK
  2010-05-20  6:58             ` SPARK Yannick Duchêne (Hibou57)
@ 2010-05-20 21:51               ` Simon Wright
  0 siblings, 0 replies; 61+ messages in thread
From: Simon Wright @ 2010-05-20 21:51 UTC (permalink / raw)


"Yannick Duchêne (Hibou57)" <yannick_duchene@yahoo.fr> writes:

> I still wonder why Text_IO fails here, and surprisingly right at
> startup. It requires investigation.

The section that leads to the problem is

      SPARK_IO.Open (File         => Id,
                     Mode_Of_File => Mode,
                     Name_Length  => File.Length,
                     Name_Of_File => Str (File),
                     Form_Of_File => "",
                     Status       => Status);

      if Status = SPARK_IO.Name_Error then
         SPARK_IO.Create (File         => Id,
                          Name_Length  => File.Length,
                          Name_Of_File => Str (File),
                          Form_Of_File => "",
                          Status       => Status);
      end if;

and you can see why exceptions are involved 'under the hood'!

The progress messages from sparkmake were enough to allow me to create
the files concerned (foo.idx, foo.smf) using touch, after which sparkmake
was able to update them.

One hopes that GNAT GPL 2010 will run on Snow Leopard! (and of course on
Leopard for those who felt no need to update).



^ permalink raw reply	[flat|nested] 61+ messages in thread

end of thread, other threads:[~2010-05-20 21:51 UTC | newest]

Thread overview: 61+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2001-08-06 16:49 SPARK programmer
2001-08-07  7:04 ` SPARK Hambut
2001-08-07  7:18 ` SPARK Hambut
2001-08-07  8:37 ` SPARK Peter Amey
2001-08-07 14:42   ` SPARK McDoobie
2001-08-09 12:36   ` SPARK Peter Amey
2001-08-14  3:14   ` SPARK Prof Karl Kleine
2001-08-14 10:25     ` SPARK Rod Chapman
  -- strict thread matches above, loose matches on Subject: below --
2001-08-08  9:46 SPARK Soeren.Henssel-Rasmussen
2001-08-08 20:04 ` SPARK McDoobie
2004-08-18 23:46 timeouts Brian May
2004-08-19  3:40 ` timeouts Steve
2004-08-22  4:18   ` timeouts Brian May
2004-08-22 12:54     ` timeouts Jeff C,
2004-08-26  1:28       ` timeouts Brian May
2004-08-26 13:34         ` timeouts Steve
2004-08-26 14:02           ` timeouts Georg Bauhaus
2004-08-26 23:03             ` SPARK Brian May
2004-08-27 10:11               ` SPARK Georg Bauhaus
2009-06-10  9:47 SPARK Robert Matthews
2010-05-12 22:55 SPARK Yannick Duchêne (Hibou57)
2010-05-13  0:52 ` SPARK Yannick Duchêne (Hibou57)
2010-05-13  3:06 ` SPARK Yannick Duchêne (Hibou57)
2010-05-13  9:28   ` SPARK stefan-lucks
2010-05-13 16:48     ` SPARK Yannick Duchêne (Hibou57)
2010-05-15 13:09       ` SPARK Peter C. Chapin
2010-05-14 22:55   ` SPARK Yannick Duchêne (Hibou57)
2010-05-13  4:00 ` SPARK Yannick Duchêne (Hibou57)
2010-05-13 16:54 ` SPARK Yannick Duchêne (Hibou57)
2010-05-13 17:15   ` SPARK Rod Chapman
2010-05-13 19:43     ` SPARK Yannick Duchêne (Hibou57)
2010-05-13 20:05       ` SPARK Rod Chapman
2010-05-13 21:43         ` SPARK Yannick Duchêne (Hibou57)
2010-05-14 14:47         ` SPARK Yannick Duchêne (Hibou57)
2010-05-14  1:20 ` SPARK Yannick Duchêne (Hibou57)
2010-05-14  4:15   ` SPARK Yannick Duchêne (Hibou57)
2010-05-14  8:17     ` SPARK Phil Thornley
2010-05-14  9:32       ` SPARK Rod Chapman
2010-05-14 14:20       ` SPARK Yannick Duchêne (Hibou57)
2010-05-14  3:07 ` SPARK Yannick Duchêne (Hibou57)
2010-05-14  3:26   ` SPARK Yannick Duchêne (Hibou57)
2010-05-14  8:11   ` SPARK Phil Thornley
2010-05-14 14:28     ` SPARK Yannick Duchêne (Hibou57)
2010-05-14 21:45 ` SPARK Yannick Duchêne (Hibou57)
2010-05-15 16:41 ` SPARK Yannick Duchêne (Hibou57)
2010-05-15 18:00   ` SPARK Yannick Duchêne (Hibou57)
2010-05-15 18:14   ` SPARK Yannick Duchêne (Hibou57)
2010-05-15 19:08     ` SPARK Yannick Duchêne (Hibou57)
2010-05-15 20:23       ` SPARK Yannick Duchêne (Hibou57)
2010-05-16 18:13         ` SPARK Peter C. Chapin
2010-05-17  0:59           ` SPARK Yannick Duchêne (Hibou57)
2010-05-16 18:17         ` SPARK Phil Thornley
2010-05-17  1:24           ` SPARK Yannick Duchêne (Hibou57)
2010-05-15 18:43   ` SPARK Phil Clayton
2010-05-15 19:12     ` SPARK Yannick Duchêne (Hibou57)
2010-05-15 21:02       ` SPARK Phil Clayton
2010-05-15 22:48 ` SPARK Yannick Duchêne (Hibou57)
2010-05-16  1:48   ` SPARK Yannick Duchêne (Hibou57)
2010-05-16  1:53     ` SPARK Yannick Duchêne (Hibou57)
2010-05-16  5:28 ` SPARK Yannick Duchêne (Hibou57)
2010-05-18 18:01   ` SPARK Yannick Duchêne (Hibou57)
2010-05-19  8:09     ` SPARK Phil Thornley
2010-05-19 20:38       ` SPARK Simon Wright
2010-05-19 21:27         ` SPARK Yannick Duchêne (Hibou57)
2010-05-20  6:21           ` SPARK Simon Wright
2010-05-20  6:58             ` SPARK Yannick Duchêne (Hibou57)
2010-05-20 21:51               ` SPARK Simon Wright
2010-05-19 21:35       ` SPARK Yannick Duchêne (Hibou57)

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox