comp.lang.ada
From: JP Thornley <jpt@diphi.demon.co.uk>
Subject: Re: SPARK Examiner -- visibility of Ada.Strings.Unbounded (and probably lots of other packages)
Date: Fri, 27 Mar 2009 08:27:49 +0000
Message-ID: <7VPFzoBF4IzJJwYX@diphi.demon.co.uk> (raw)
In-Reply-To: KfadnR5dBtNhTlbUnZ2dnUVZ8gqWnZ2d@posted.plusnet

In article <KfadnR5dBtNhTlbUnZ2dnUVZ8gqWnZ2d@posted.plusnet>, Tim Rowe 
<spamtrap@tgrowe.plus.net> writes
>Ok, I'm still not getting importing to work.
>
>I have a package specification Foo.ads, which is pure SPARK -- Examiner is 
>happy with it.
>
>I have a package specification Bar.ads, in the same directory, which begins:
>
>with Foo;
>--# inherit Foo;
>
>(Which looks to me very like the example on p59 of the Spark95 manual 
>that comes with the evaluation version of the examiner), but Examiner 
>complains:
>Line
>   1  with Foo;
>           ^1
>--- (  1)  Warning           :  1: The identifier Foo is either
>           undeclared or not visible at this point.
>
>   2  --# inherit Foo;
>                  ^2
>*** (  2)  Semantic Error    :  1: The identifier Foo is either
>           undeclared or not visible at this point.
>
>I'm still missing something obvious, aren't I?

It looks like you aren't telling the Examiner to look at Foo first.
Check the report file that is produced (spark.rep) which lists all the 
files relevant to the run of the Examiner.  This may say that it 
couldn't find the specification of Foo.

The Examiner does not assume any file naming convention, and it will not 
go looking for files based on an expected file name (the way Your 
Favourite Compiler (TM) does).

If you are just trying out some ideas in SPARK and don't want to compile 
the code, then simply put everything into one file and examine that.
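For instance, a single file holding both specs could be set up like this 
(a minimal sketch -- the file name allinone.ada and the package contents 
are placeholders, not from your code):

```shell
# Write both package specs, in dependency order, into one file.
cat > allinone.ada <<'EOF'
package Foo is
   function Value return Integer;
end Foo;

with Foo;
--# inherit Foo;
package Bar is
   function Twice return Integer;
end Bar;
EOF

# A single Examiner run then sees both units (needs the Examiner
# on your PATH, so shown commented out here):
#   spark allinone.ada
```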

If you want to keep separate files then you need to tell the Examiner to 
look at all the relevant files - in this case use the command:
spark foo.ads,bar.ads

If you have more than about four files this isn't workable, so instead 
you can use a 'metafile', which is simply a list of the files to be 
examined. Define foobar.smf as:
foo.ads
bar.ads

then use the command:
spark @foobar
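As shell commands, the metafile step looks like this (file names as in 
the example above; the spark invocation is commented out since it needs 
the Examiner installed):

```shell
# Create the metafile: one file name per line, nothing else.
cat > foobar.smf <<'EOF'
foo.ads
bar.ads
EOF

# Hand the metafile to the Examiner with an '@' prefix:
#   spark @foobar
```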

This is OK as long as you are happy to examine the complete set of files 
every time.  For larger systems you need an index file, which tells the 
Examiner where to find any of the files it may need.  In this case 
define foobar.idx as:
foo specification is in foo.ads
bar specification is in bar.ads

then give this as the index_file qualifier - e.g. to examine the body of 
bar:
spark /index=foobar bar.adb

and the Examiner uses the index file to find first the spec of bar and 
then, because of the inherit, the spec of foo.
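Sketched the same way (again, the spark invocation is commented out, and 
bar.adb is assumed to exist alongside the specs):

```shell
# Create the index file mapping each unit to the file that holds it.
cat > foobar.idx <<'EOF'
foo specification is in foo.ads
bar specification is in bar.ads
EOF

# Examine just the body of bar; the Examiner consults the index to
# locate the spec of bar and then, via the inherit, the spec of foo:
#   spark /index=foobar bar.adb
```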

(Then for bigger systems you can define superindex files as well .... )

Cheers,

Phil

-- 
JP Thornley
