From: dewar@cs.nyu.edu (Robert Dewar)
Subject: Re: Ada Core Technologies and Ada95 Standards
Date: 1996/04/10
Organization: Courant Institute of Mathematical Sciences
Newsgroups: comp.lang.ada
References: <00001a73+00002c20@msn.com> <828038680.5631@assen.demon.co.uk> <828127251.85@assen.demon.co.uk> <315FD5C9.342F@lfwc.lockheed.com> <3160EFBF.BF9@lfwc.lockheed.com>

Ken said

  "Again, a difference in philosophy: In my domain, _not_ having a
   regression test for every bug found is out of the question."

No, not a difference in philosophies, but rather a difference in domains. Certainly an individual vendor can build a complete set of regression tests, based on every bug found, but I don't see how that could practically be done across compilers.

A comparison would be if 20 different vendors wrote completely different F22 software, with different cockpit interfaces. Trying to build a regression suite corresponding to all the bugs found in all 20 versions would be very much more difficult.

Even that isn't a valid comparison, because it does not capture the fact that Ada compilers are multi-target, so many bugs are platform specific. It is more as if 20 companies built 20 programs that could be retargeted to any aircraft in the sky.

I don't think that makes any sense at all -- as I say, the domains are VERY different, and trying to apply the compiler model to the F22 makes as little sense as trying to apply the F22 model to a compiler!
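
For concreteness, here is a minimal sketch of what one such per-bug regression test might look like on the vendor side. The report number, the bug itself, and the PASSED/FAILED convention are made up for illustration; the point is only that each reported problem becomes a small self-checking reproducer that a nightly run executes on every supported target.

   --  pr_1234.adb: hypothetical reproducer for report 1234,
   --  "wrong result from 'Image of a negative integer".
   --  Prints PASSED or FAILED so a simple driver can scan the output.
   with Ada.Text_IO; use Ada.Text_IO;

   procedure PR_1234 is
      X : constant Integer := -7;
   begin
      if Integer'Image (X) = "-7" then
         Put_Line ("PASSED pr_1234");
      else
         Put_Line ("FAILED pr_1234");
      end if;
   end PR_1234;

Multiply that by every bug ever reported, by every vendor, and by every target each vendor supports, and the scale of a shared cross-compiler suite becomes clear.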