From: dewar@cs.nyu.edu (Robert Dewar)
Subject: Re: Ada Core Technologies and Ada95 Standards
Date: 1996/04/16
Message-ID: #1/1
references: <00001a73+00002c20@msn.com> <828038680.5631@assen.demon.co.uk> <828127251.85@assen.demon.co.uk> <315FD5C9.342F@lfwc.lockheed.com> <3160EFBF.BF9@lfwc.lockheed.com> <31729101.3F83@lfwc.lockheed.com> <31735D9E.7B1@lmtas.lmco.com>
organization: Courant Institute of Mathematical Sciences
newsgroups: comp.lang.ada

Ken said

"Robert Dewar wrote:
>
> Ken said
>
> "> A comparison would be if 20 different vendors wrote completely different
> > F22 software, with different cockpit interfaces.
>
> Twenty sounds a little low, but that's close enough...."
>
> Are you really saying that the ENTIRE F22 software is duplicated by 20
> vendors?

Nope.  Of course, that's not what you asked.  See original quote above.  :)

However, we've certainly had multiple vendors write the same software
(N-version programming) on previous programs.  I don't like that approach
myself, but that's a different discussion.

Of course, it doesn't matter whether or not twenty vendors wrote the same
software, or different software going into the same system, with respect
to the question you asked."

OK, that makes sense -- I couldn't imagine that the F22 was indeed like the
compiler field, where we have multiple vendors generating complete Ada
systems for the same target.

That IS exactly what I asked -- sorry if I was not clear -- and it matters
very much.  We were talking about taking failures and creating industry-wide
regression suites.
This is very hard to do across more than one vendor.  For example, a lot
of the tests in our suite check

  o  the text of error messages generated
  o  performance and behavior of GNAT-specific features
  o  particular GNAT choices for implementation-dependent behavior

None of these are useful multi-vendor tests, at least not without a lot of
work, and even then only some of them could be salvaged.  Other tests in
our suite are white-box (path coverage) tests based on the specific
algorithms used by GNAT.  They are runnable, but not especially useful, on
other compilers.

Nevertheless, there are a lot of tests in regression suites that could be
generally used.  If we put a test in our suite that seems clearly related
to ACVC-type conformance testing, I send it along to the ACVC folks for
inclusion.

As an example of cross-vendor use of test suites, one of the important
aspects of our agreement with DEC is that we will have access to the DEC
test suite.  It will take a lot of work to adapt that suite, and part of
the reason that it is feasible is that we are committed to a very high
degree of compatibility with DEC Ada.
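To make the first category concrete, here is a minimal sketch (not GNAT's
actual harness, which the post does not describe) of an error-message
regression test: compile a deliberately illegal unit and diff the
compiler's diagnostics against a stored baseline.  The compiler driver,
the "-c" flag, and the baseline file name are all assumptions for
illustration -- and the reason such tests don't port across vendors is
visible in the code: the baseline captures one compiler's exact wording.

```python
import subprocess
from pathlib import Path

def normalize(text):
    # Drop blank lines and trailing whitespace so purely cosmetic
    # formatting changes do not register as regressions.
    return [line.rstrip() for line in text.splitlines() if line.strip()]

def check_error_output(compiler, source, baseline):
    # Compile a known-bad unit and compare the diagnostics on stderr
    # against a stored baseline transcript, line by line.
    result = subprocess.run([compiler, "-c", source],
                            capture_output=True, text=True)
    return normalize(result.stderr) == normalize(Path(baseline).read_text())
```

A second vendor's compiler would emit differently worded (if equally
correct) diagnostics, so every baseline file would have to be redone --
which is the "lot of work" referred to above.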