From mboxrd@z Thu Jan 1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level:
X-Spam-Status: No, score=-1.3 required=5.0 tests=BAYES_00,INVALID_MSGID autolearn=no autolearn_force=no version=3.4.4
X-Google-Language: ENGLISH,ASCII-7-bit
X-Google-Thread: 103376,2afac1a4161c7f35
X-Google-Attributes: gid103376,public
From: dewar@merv.cs.nyu.edu (Robert Dewar)
Subject: Re: who owns the code? was Re: Distinguishing type names from other identifiers
Date: 1998/01/19
Message-ID: #1/1
X-Deja-AN: 317379708
References: <884736089.2104295427@dejanews.com> <199801191458.PAA28408@basement.replay.com>
X-Complaints-To: usenet@news.nyu.edu
X-Trace: news.nyu.edu 885227926 19821 (None) 128.122.140.58
Organization: New York University
Newsgroups: comp.lang.ada
Date: 1998-01-19T00:00:00+00:00
List-Id:

Jeff Carter said

<>

Actually this would be foreign to me. Sure, major changes do get discussed in advance, but for minor fixes, and even significant additions, we use an entirely different scheme, which is to rely on a very extensive regression suite (several million lines of code, in 25,000 files in 4,500 directories). No one is allowed to make any changes without running this suite on their changes and making sure that no regressions are caused.

We have found this suite MUCH more effective than another pair of eyes, though another pair of eyes is always valuable for dealing with issues other than functional correctness (e.g. efficiency and style).

There are two factors in the GNAT project which are relevant here:

1. We often need to fix problems for customers with a critical problem on a very short time scale, while retaining quality control. Requiring review cycles and discussions of every modification would be inconsistent with the required rapid turnaround time.

2. Compilers are a very nice case for testing.
It is possible to build very effective test suites. (We use our own internal test suite that I referred to before, the ACVC test suite, and the DEC test suite, and between them we have found them very reliable in catching mistakes in changes.)

Of course we get MANY attempted changes that do NOT get through the mailserver. (I don't know what percentage of mail server attempts, i.e. attempts to run our regression suite, fail, but it is high; I would guess 75%. It is amazing how often a change that looks completely correct, and which any additional pair of eyes would agree is correct, is wrong because of some subtle interaction caught by some weird test in the suite!)

The way our mail server works is that you prepare your change as a patch and send it to a special mailserver address, choosing which of several machines to run on (normally any old machine will do, but if you are making a change that might be machine dependent, you can run it on a particular machine, or perhaps even on more than one machine). A few hours later you get a report, and you can't check things in unless the report is clean.

Then, as a backup to this process, a message goes out to the group noting the revision history of the change that has been made, and anyone who usefully can will check the change to make sure it makes sense. Furthermore, that night the regression suite is rerun (which catches the very rare cases of conflicting changes -- we have not seen one of these in recent memory, certainly not for a year), and the ACVC and selected DEC test suite tests are also run.

We have found that this scheme works very well for us. Obviously it is not necessarily applicable to other environments; in particular, it is often very difficult to construct a reliable test suite for many applications.

Robert Dewar
Ada Core Technologies
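[Editor's illustration, not part of the original post: the check-in gate Dewar describes -- run the whole regression suite against a proposed change, report the failures, and permit check-in only when the report is clean -- reduces to logic like the following sketch. All names and the list of test commands are hypothetical; the real GNAT mailserver was far more elaborate.]

```python
import subprocess

def run_regression_suite(test_commands):
    """Run each regression test command; return the list that failed.

    Each entry is a shell command standing in for one test in the
    suite (hypothetical stand-ins for the real test drivers).
    """
    failures = []
    for cmd in test_commands:
        result = subprocess.run(cmd, shell=True, capture_output=True)
        if result.returncode != 0:
            failures.append(cmd)
    return failures

def may_check_in(test_commands):
    """Check-in is permitted only when the regression report is clean."""
    return len(run_regression_suite(test_commands)) == 0
```

For example, `may_check_in(["true", "true"])` yields True (all tests pass), while `may_check_in(["false"])` yields False, modelling a report with one regression blocking the check-in.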