From: dewar@merv.cs.nyu.edu (Robert Dewar)
Subject: Re: Ada95 to ANSI_C converter
Date: 1997/04/04
References: <5htg0a$v8v$1@news.nyu.edu>
Organization: New York University
Newsgroups: comp.lang.ada,comp.lang.c

Keith says

<<(I *like* nasty test cases)>>

The trouble with nasty test cases is that they can sometimes result in
implementors wasting huge amounts of time getting right things that are of
no possible concern to users. The old ACVC suite was full of such things,
and it is still quite often the case that a change we make affects none of
the user programs in our regression suite but causes an obscure failure in
some ACVC test. Equally, it is often the case that a change blows away huge
numbers of user tests while the ACVC hums merrily on with no errors.

Anecdotally (we really should collect data on this; we have the raw data),
our impression is that the Ada 95 tests track user programs more closely.
They are certainly much less full of "nasty test cases".