From: JP Thornley
Newsgroups: comp.lang.ada
Subject: Re: SPARK Examiner -- visibility of Ada.Strings.Unbounded (and probably lots of other packages)
Date: Fri, 27 Mar 2009 08:27:49 +0000
Message-ID: <7VPFzoBF4IzJJwYX@diphi.demon.co.uk>

In article , Tim Rowe writes
>Ok, I'm still not getting importing to work.
>
>I have a package specification Foo.ads, which is pure SPARK -- the
>Examiner is happy with it.
>
>I have a package specification Bar.ads, in the same directory, which
>begins:
>
>with Foo;
>--# inherit Foo;
>
>(which looks to me very like the example on p59 of the Spark95 manual
>that comes with the evaluation version of the Examiner), but the
>Examiner complains:
>
>Line
>   1  with Foo;
>      ^1
>--- (  1)  Warning : 1: The identifier Foo is either undeclared or not
>           visible at this point.
>   2  --# inherit Foo;
>      ^2
>*** (  2)  Semantic Error : 1: The identifier Foo is either undeclared
>           or not visible at this point.
>
>I'm still missing something obvious, aren't I?

It looks like you aren't telling the Examiner to look at Foo first.

Check the report file that is produced (spark.rep), which lists all the
files relevant to the run of the Examiner. This may say that it couldn't
find the specification of Foo.

The Examiner does not assume any file naming convention and doesn't go
looking for files based on an expected file name (in the way that Your
Favourite Compiler (TM) does).

If you are just trying out some ideas in SPARK and don't want to compile
the code, then simply put everything into one file and examine that.

If you want to keep separate files, then you need to tell the Examiner
to look at all the relevant files - in this case use the command:

   spark foo.ads,bar.ads

If you have more than about four files this isn't workable, so you can
use a 'metafile', which is simply a list of the files to be examined.
Define foobar.smf as:

   foo.ads
   bar.ads

then use the command:

   spark @foobar

This is fine as long as you are happy to examine the complete set of
files every time. For larger systems you need an index file, which tells
the Examiner where to find any of the files it may need. In this case
define foobar.idx as:

   foo specification is in foo.ads
   bar specification is in bar.ads

then give this as the index_file qualifier - e.g. to examine the body of
bar:

   spark /index=foobar bar.adb

and the Examiner uses the index file to find first the spec of bar and
then, because of the inherit, the spec of foo.

(Then for bigger systems you can define superindex files as well .... )

Cheers,

Phil
--
JP Thornley
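
For reference, a minimal sketch of the kind of file pair the thread is
discussing - the package contents here are hypothetical (any pure SPARK
declarations would do); the point is only that bar.ads needs both the
Ada context clause and the SPARK annotation:

```ada
-- foo.ads : a pure SPARK spec (contents hypothetical)
package Foo is
   Limit : constant Integer := 100;
end Foo;

-- bar.ads : both the "with" and the "--# inherit" are required
with Foo;
--# inherit Foo;
package Bar is
   procedure Check (N  : in  Integer;
                    OK : out Boolean);
   --# derives OK from N;
end Bar;
```

Examined together (e.g. with "spark foo.ads,bar.ads" as above), the
Examiner processes the spec of Foo before Bar, so the inherit resolves.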