From mboxrd@z Thu Jan  1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level: 
X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00 autolearn=ham autolearn_force=no version=3.4.4
X-Google-Language: ENGLISH,ASCII-7-bit
X-Google-Thread: 103376,744136b4fae1ff3e
X-Google-Attributes: gid103376,public
X-Google-ArrivalTime: 2003-03-10 14:33:38 PST
Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!news.tele.dk!news.tele.dk!small.news.tele.dk!newsfeed1.bredband.com!bredband!uio.no!news-FFM2.ecrc.net!news.iks-jena.de!not-for-mail
From: Lutz Donnerhacke
Newsgroups: comp.lang.ada
Subject: Re: [Spark] Converting Arrays
Date: Mon, 10 Mar 2003 22:33:38 +0000 (UTC)
Organization: IKS GmbH Jena
Message-ID: 
References: <1c6ba.19009$Oz1.770240@bgtnsc05-news.ops.worldnet.att.net>
NNTP-Posting-Host: belenus.iks-jena.de
X-Trace: branwen.iks-jena.de 1047335618 24316 217.17.192.34 (10 Mar 2003 22:33:38 GMT)
X-Complaints-To: usenet@iks-jena.de
NNTP-Posting-Date: Mon, 10 Mar 2003 22:33:38 +0000 (UTC)
User-Agent: slrn/0.9.7.4 (Linux)
Xref: archiver1.google.com comp.lang.ada:35155
Date: 2003-03-10T22:33:38+00:00
List-Id: 

* James S. Rogers wrote:
> "Lutz Donnerhacke" wrote in message
>> I ran into a difficult problem (for me):
>
> change the initialization of "path" as follows:
>
>    path := (others => ASCII.NUL);

   path := (others => ASCII.NUL);
   path (1 .. s'Length) := s;

would be the appropriate Ada construct. But I am not talking about Ada,
I am talking about the SPARK subset of Ada, which refuses such constructs
because:
 - the aggregate is untyped, i.e. not qualified with its type (illegal
   in SPARK),
 - it contains non-static components (slicing is illegal in SPARK),
 - it assigns to non-statically known bounds (illegal in SPARK).

> Note that another problem looms in your code.
> The "for" loop will attempt to access one element
> beyond the end of string "s".

Yep. SPARK would have found this later, but currently it refuses the
construct as a whole. What does not compile is already erroneous. ;-)

> It appears that you are trying to copy s into path
> while skipping the first element of s. This can be done
> more directly and correctly with array slices:

Slicing is illegal in SPARK. A possible solution is to assign the values
in two loops (first sketch below). But SPARK refuses this construct, too,
because the data flow analysis handles an array as a single variable and
therefore complains about uninitialized values, reuse of uninitialized
values, and contradictions to the stated information flow.

> You will need to decide how to handle the case when
> path is shorter than s.

Very simple: I have a precondition on the specification claiming that s
is strictly shorter than path, so for every call of this procedure SPARK
proves that it cannot fail due to unexpected parameters.

I fear that I have to use a constrained array type for out parameters
(second sketch below). :-(
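
For illustration, this is roughly the two-loop variant the Examiner
rejects. All names are made up for the sketch; assume S is an in
parameter of type String and Path an out parameter of a constrained
string subtype:

   -- Flow analysis treats Path as a single variable, so even the
   -- first element assignment counts as an update of a variable
   -- that has no value yet; the Examiner reports data flow errors
   -- to that effect.
   for I in Positive range 1 .. S'Length loop
      Path (I) := S (I);
   end loop;
   -- The padding loop draws the same complaint.
   for I in Positive range S'Length + 1 .. Path'Last loop
      Path (I) := ASCII.NUL;
   end loop;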
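
And this is roughly the shape of the constrained-type workaround I mean,
with the precondition on the specification. Again all names (Path_Ops,
To_Path, Max_Path) are invented for the sketch; the annotations are
SPARK 95 syntax:

   package Path_Ops is
      Max_Path : constant := 256;
      subtype Path_Index is Positive range 1 .. Max_Path;
      subtype Path_Type  is String (Path_Index);

      procedure To_Path (S : in String; Path : out Path_Type);
      --# derives Path from S;
      --# pre S'Length < Max_Path;
   end Path_Ops;

   package body Path_Ops is
      procedure To_Path (S : in String; Path : out Path_Type)
      is
      begin
         -- A qualified aggregate of a constrained subtype is legal
         -- SPARK and defines the whole array in one step, which
         -- satisfies the flow analysis.
         Path := Path_Type'(others => ASCII.NUL);
         -- Overwriting single elements is now a mere update of an
         -- already defined variable.
         for I in Positive range 1 .. S'Length loop
            Path (I) := S (S'First + I - 1);
         end loop;
      end To_Path;
   end Path_Ops;

The precondition lets the proof tools discharge the index checks in the
copy loop; the price is that every caller has to declare its buffer as
Path_Type instead of a plain String.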