From: "Randy Brukardt"
Newsgroups: comp.lang.ada
Subject: Re: Ann: Gcc/GNAT 9.1 released - Test Results
Date: Fri, 10 May 2019 17:32:21 -0500
Organization: JSA Research & Innovation
References: <7d10dac8-f871-4cf9-b57f-aa9a114d2650@googlegroups.com>

"Simon Wright" wrote in message news:lyk1f0vw11.fsf@pushface.org...
...
> Quite a few of the unsupported tests are so because they deal with the
> consequences of changing the source code and rebuilding. The ACATS
> source for those tests contains multiple copies of some units, which
> results in a gnatchop failure. It'd require a lot of work to fix, and
> for what is essentially an IVP (installation verification procedure)
> seems like overkill.

Note that the (relatively new) ACATS grading tool checks for process
failures, and the GNAT scripting tool available in the submitted tests
area handles most of these tests properly.

In my experience, there are a lot of B-Tests (on *every* compiler that
I've checked) that are either in a grey area as to whether they have
passed or are outright failing. I suspect that's because errors creep
into supposedly "known good results", and also because, without any
formal verification, it is really easy to decide not to worry about a
dubious result. (It's also possible that my test procedure didn't
exactly match the one typically used by the vendor, so some results
might differ.)

                         Randy.
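
P.S. For anyone who hasn't dug into the ACATS sources: a B-Test is a
compile-time test in which each line the compiler is required to reject
carries an "-- ERROR:" comment, and constructs that must be accepted are
marked "-- OK". Grading means deciding whether the compiler flagged the
ERROR lines and only those, which is exactly where the grey areas come
from. The fragment below is only an invented illustration of the shape of
such a test (the name and declarations are made up, and, being a B-Test,
it is deliberately illegal, so it is not supposed to compile):

-- Invented fragment in the style of an ACATS B-Test; not a real test.
procedure Bxxx001 is
   type Color is (Red, Green, Blue);
   type Fruit is (Apple, Orange);
   C : Color;
   F : Fruit;
begin
   C := Apple;                     -- ERROR: Apple is not of type Color.
   F := Orange;                    -- OK.
   C := Red;                       -- OK.
end Bxxx001;

A compiler passes such a test only if it rejects the ERROR line without
also rejecting the OK lines; when the diagnostics are vague or land on
nearby lines, the result ends up in the grey area described above.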