From mboxrd@z Thu Jan 1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level:
X-Spam-Status: No, score=-1.3 required=5.0 tests=BAYES_00,INVALID_MSGID, LOTS_OF_MONEY autolearn=no autolearn_force=no version=3.4.4
X-Google-Language: ENGLISH,ASCII-7-bit
X-Google-Thread: 103376,d901a50a5adfec3c
X-Google-Attributes: gid103376,public
X-Google-Thread: 1094ba,9f0bf354542633fd
X-Google-Attributes: gid1094ba,public
From: "Ian St. John"
Subject: Re: Fortran or Ada?
Date: 1998/10/06
Message-ID: <361a94ca.0@news.passport.ca>#1/1
X-Deja-AN: 398912470
References: <3617AA49.340A5899@icon.fi> <6v9s4t$egn$1@ys.ifremer.fr> <3618dc33.0@news.passport.ca> <6vbhhc$5kj$1@nnrp1.dejanews.com> <6vdfq4$p1$1@nnrp1.dejanews.com>
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.3110.3
X-Trace: 6 Oct 1998 18:08:10 +0400, 199.166.20.202
Organization: Passport.Ca
Newsgroups: comp.lang.fortran,comp.lang.ada
Date: 1998-10-06T00:00:00+00:00
List-Id:

dewarr@my-dejanews.com wrote in message <6vdfq4$p1$1@nnrp1.dejanews.com>...
>In article , "Ian St. John" wrote:
>>
>> Testing by itself can never guarantee correct code. The testing is just to
>> determine syntax errors, oversights, etc.
>
>That's a bit strong, there are definitely cases where testing can be
>exhaustive, e.g. in checking out a sqrt routine for IEEE short form
>arithmetic. Indeed it is almost practical to do exhaustive testing on
>long format division (which would have saved Intel many millions of
>dollars :-)

Exhaustive testing would have taken way too long. And if you test exhaustively for real*4, does that mean the real*8 or real*10 values are correct? It would have been much faster to verify that the lookup table had been copied over correctly. The table had already been 'exhaustively tested'. Nobody looked. That is when problems creep in: management oversights, bad decisions, and plain lack of common sense. 'Check that it's plugged in *first*.' Like the Denver airport.
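To put numbers on the exhaustive-testing point above: a routine whose input domain is small enough really can be checked over every input. Here is a minimal sketch in Python rather than Fortran or Ada (the routine name `isqrt16` and the 16-bit domain are my own choices, picked so the loop finishes instantly; a real*4 sqrt already means 2**32 cases, and real*8 is hopeless):

```python
import math

def isqrt16(n):
    """Hypothetical routine under test: integer square root on 16-bit inputs."""
    return math.isqrt(n)

# Exhaustive test: the whole 16-bit input domain is only 65,536 cases,
# so we can verify the defining property r*r <= n < (r+1)*(r+1) for ALL inputs.
for n in range(2 ** 16):
    r = isqrt16(n)
    assert r * r <= n < (r + 1) * (r + 1), f"isqrt16 failed at {n}"
```

The identical loop over 64-bit operands would run for longer than the hardware will exist, which is exactly the wall the rest of this post is about.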
A two-hour session with LISP would have told them that the queuing for the luggage carts wouldn't work. (The test was reported in an IEEE Spectrum or Computer magazine piece analyzing the failure.) The multi-million-dollar, full-blown simulator that was to verify the design of the program was still being 'debugged' long after the airport went into operation. It was scheduled to be finished a year after it became totally irrelevant. Can you tell me why they built a multi-million-dollar simulator that wouldn't give them the answer in time to be of any use? Duuuuhhhhhh. Guess someone wanted a big-budget project.

Another example of why testing will never guarantee correct coding might be the Win95 midnight bug. It bumps the date by two days if you happen to shut down on a particular second before midnight. It took Microsoft days to confirm the bug, even with good evidence that it was there, because it had to be one specific second, and that second changed with the CPU speed, etc.

Complicated systems just can't be 'exhaustively' tested. Each new subsystem in Win95, each VxD or DLL, makes for that many more combinations of interactions. You get a "combinatorial explosion". Brute force doesn't work. It has to be a combination of design, mathematics, testing, and management. How does this module affect others? Will any new timing delays affect latency in this realtime module? Etc.

And you may never get that 'last bug'. People coded it, and some programmers are better than others. So you design it with robust operation in mind. Then you keep squashing bugs as they come out, until you just can't find any more. You have Bill analyze Fred's code, Harry analyze Bill's code, and Fred analyze Harry's code. Change the assignments, etc., to keep everyone aware of the big picture, and of how implementation details may affect the likelihood of specific failures. Have group discussions, where details can be brought to light.
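The combinatorial explosion is easy to make concrete with a back-of-the-envelope sketch (Python again; the component count, states per component, and test rate are invented for illustration, not measurements of Win95):

```python
# Sketch: why brute-force testing of component interactions blows up.
# All numbers below are assumed, purely for illustration.
components = 30           # e.g. VxDs, DLLs, subsystems that can interact
states_per_component = 4  # loaded/unloaded, version, timing window, ...

# Every combination of component states is a distinct configuration to test.
total_configs = states_per_component ** components
print(f"{total_configs:.3e} configurations")

# Even at a wildly optimistic test rate, the budget is absurd.
tests_per_second = 1_000_000
years = total_configs / tests_per_second / (3600 * 24 * 365)
print(f"roughly {years:,.0f} years at {tests_per_second:,} tests/second")
```

Thirty components with four states each is already 4**30, over 10**18 configurations; adding one more component multiplies the whole budget by four. That is why the answer has to be design and review, not brute force.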
Someone may have experience (like LISP programming) that can simplify the testing, or point to possible failure modes not currently under discussion.

**** NOTE: These opinions are my own. I am old enough to make up my own mind, so there. ****