From: eachus@spectre.mitre.org (Robert I. Eachus)
Newsgroups: comp.lang.ada
Subject: Re: Ada Core Technologies and Ada95 Standards
Date: 1996/04/11
Organization: The Mitre Corp., Bedford, MA.
References: <00001a73+00002c20@msn.com>

   (I don't know if this discussion is being followed by many
non-participants, but it certainly is on topic.)

   I think Ken is missing the point Robert is making--not that
ACVC-type testing of Ada compilers is in need of improvement, but
that any further emphasis on testing
_instead_of_other_means_of_improving_quality_ would result in
decreased compiler quality.

   I would be surprised if ACT performs less than a MIPS-century of
ACVC and regression testing each year. (A MIPS-century is the work
of a one-MIPS machine running non-stop for a hundred years.) Also,
from experience, I suspect that the tests which exercise the code
generator (i.e., tests other than the ACVC B or L tests) are run
much more often than those which are expected to produce error
messages. (Even if you only changed the front end, you want to test
against most code generators, but running the B-tests once is
probably sufficient.)

   I doubt that the F-22 program tests at this level. Speaking from
experience, the testing on large programs like this is in the
MIPS-month-per-year range if we are lucky. That is still one HUGE
lump of testing. However, the F-22 program probably spends much more
time on formal proofs of correctness--not for the whole system, but
for particular algorithms, like the control laws.

   In the compiler business, a lot of this work is done by tools
instead. I don't know how much of this ACT does, but I know that,
for example, DDC-I uses VDM tools to formally prove that components
meet requirements. This approach is very viable for compilers
because the effort put into making assertions about, say, parse
trees pays off in testing all of the code that modifies those trees.
(And in any good Ada compiler that is a lot of code, usually
including, but not limited to, the parser, semantic analysis, the
high- and low-level optimizers, and the hardware-specific modules.)
I've appended a made-up sketch of the sort of check I mean after my
signature.

--

					Robert I. Eachus

with Standard_Disclaimer;
use  Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...
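
P.S. Here is the flavor of what I mean by an assertion about parse
trees. This is a minimal made-up sketch, not code from any real
compiler, and all of the names are invented for illustration. The
point is that one Check routine, written once, is exercised by every
pass that rewrites the tree:

package Parse_Trees is

   type Node;
   type Node_Ptr is access Node;

   type Node_Kind is (Leaf, Unary_Op, Binary_Op);

   --  The meaning of Left and Right depends on Kind; Check below
   --  enforces that dependence.
   type Node is record
      Kind        : Node_Kind;
      Left, Right : Node_Ptr;
   end record;

   Broken_Invariant : exception;

   --  Called before and after every pass that rewrites a tree.
   procedure Check (Root : in Node_Ptr);

end Parse_Trees;

package body Parse_Trees is

   procedure Check (Root : in Node_Ptr) is
   begin
      if Root = null then
         return;
      end if;
      case Root.Kind is
         when Leaf =>                     --  leaves have no children
            if Root.Left /= null or Root.Right /= null then
               raise Broken_Invariant;
            end if;
         when Unary_Op =>                 --  exactly one operand
            if Root.Left = null or Root.Right /= null then
               raise Broken_Invariant;
            end if;
            Check (Root.Left);
         when Binary_Op =>                --  exactly two operands
            if Root.Left = null or Root.Right = null then
               raise Broken_Invariant;
            end if;
            Check (Root.Left);
            Check (Root.Right);
      end case;
   end Check;

end Parse_Trees;

Run Check over the tree before and after each pass during testing,
and a pass that corrupts the structure announces itself immediately,
instead of showing up as a mysterious bug three phases later.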