From: Simon Wright
Newsgroups: comp.lang.ada
Subject: Re: Formal Subprogram Access
Date: Tue, 13 Feb 2018 12:24:08 +0000

"Randy Brukardt" writes:

> There's no such rule in the RM. As Christoph noted, a generic formal
> subprogram has convention Ada, 'Access is allowed. If you pass an
> attribute to it, you have to wrap it in a real subprogram for this
> reason. (There is a lot of code specifically to do this in Janus/Ada,
> it probably never has been tested outside of a single ACATS test -- I
> don't think anyone ever has had a reason to pass an attribute like
> Succ as a formal subprogram.)

In GNAT, I've passed 'Image and 'Value as actuals for the corresponding
formal subprograms:

   generic
      type Checked_Type is private;
      Checked_Type_Name : String;
      with function "=" (L, R : Checked_Type) return Boolean is <>;
      with function Value (S : String) return Checked_Type is <>;
      with function Image (V : Checked_Type) return String is <>;
   package Check_Passed_Value is

Which ACATS test should have caught that (or any other attribute, of
course)?

I see that the GCC-based tests that I've been working on at
https://github.com/simonjwright/ACATS wouldn't necessarily catch all of
the errors expected in the B (and L?) tests: for example, BC70009
reports 4 compilation errors, which counts as a success because B tests
are of course expected to fail, but there's no check that all of the
expected errors have in fact been caught. (In this case, 4 was
correct.) I should check the grading tool, though I'm not sure that
complete conformance checks are justified in this context - it's more a
matter of ensuring that some feature hasn't been broken.
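
For reference, the kind of instantiation I mean is sketched below. This
isn't the actual client code (Demo_Check and Check_Integer are names
invented for illustration); it just shows the pattern of passing the
attribute functions directly as actuals, which GNAT accepts:

   --  Minimal sketch (names invented for illustration): instantiate
   --  the generic above, passing the attribute functions Integer'Value
   --  and Integer'Image directly as the actual subprograms.
   with Check_Passed_Value;
   procedure Demo_Check is
      package Check_Integer is new Check_Passed_Value
        (Checked_Type      => Integer,
         Checked_Type_Name => "Integer",
         Value             => Integer'Value,
         Image             => Integer'Image);
      --  "=" is found implicitly through its "is <>" default, since
      --  the predefined Integer "=" is directly visible; the attribute
      --  functions have to be passed explicitly.
   begin
      null;
   end Demo_Check;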