From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Ian St. John"
Subject: Re: Fortran or Ada?
Date: 1998/10/05
Message-ID: #1/1
References: <3617AA49.340A5899@icon.fi> <6v9s4t$egn$1@ys.ifremer.fr> <3618dc33.0@news.passport.ca> <6vbhhc$5kj$1@nnrp1.dejanews.com>
Organization: Sprint Canada Inc.
Newsgroups: comp.lang.fortran,comp.lang.ada

dewarr@my-dejanews.com wrote in message <6vbhhc$5kj$1@nnrp1.dejanews.com>...
>
>Well you can of course make your "HO" valid by simply defining any testing
>that results in software with bugs as not meeting your criteria for testing
>"well". But in practice it would surprise me if anyone these days would
>propose that testing alone is sufficient for guaranteeing freedom from
>failure in software. This is hardly controversial, indeed what would be
>controversial at this stage is precisely this view (that testing *could*
>be sufficient).

Testing by itself can never guarantee correct code. Testing is there to
catch syntax errors, oversights, and the like. The real software
engineering is in designing the software and in deciding what limits to
put on the model.

If you have designed the code correctly and tested well, first at the
component level and on up to full integration, there are very few ways
for the software to fail that do not come to light. The gain in
reliability continues with each launch or test. The hardware, on the
other hand, is new on each launch. The reliability of each part,
multiplied over the number of critical parts, makes hardware the better
assumption when handling a failure. (To put numbers on it: if each of,
say, a thousand critical parts is 99.99% reliable per flight, the
assembly as a whole comes in at 0.9999**1000, or roughly 90.5%.)

Nothing I have heard says that the design team failed in correctly
coding for the *job it was intended for*. An overflow in the value would
have had to be a hardware error on the *system it was designed for*. The
failure (I repeat) was in management, for not doing a full re-evaluation
of the software limits and assumptions.
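
To make the overflow point concrete, here is a minimal Ada sketch. The
names and the 64_000.0 input are invented for illustration, not taken
from any flight code; the point is only that a conversion which is safe
inside its designed envelope fails with Constraint_Error outside it:

with Ada.Text_IO; use Ada.Text_IO;

procedure Overflow_Demo is
   --  A 16-bit signed range of the kind an alignment routine might
   --  store a bias value in (illustrative, not a real declaration).
   type Bias_16 is range -32_768 .. 32_767;

   --  Inside the envelope the code was designed for, this value can
   --  never leave the range above; on a faster vehicle it can.
   Horizontal_Velocity : constant Float := 64_000.0;

   Converted : Bias_16;
begin
   --  Correct code for the original envelope: the conversion raises
   --  Constraint_Error only when fed an out-of-design input.
   Converted := Bias_16 (Horizontal_Velocity);
   Put_Line ("Converted bias:" & Bias_16'Image (Converted));
exception
   when Constraint_Error =>
      Put_Line ("Constraint_Error: input outside the designed limits");
end Overflow_Demo;

Run it on inputs from the original envelope and the handler is dead
code. The bug, such as it is, lives in the mismatch between the envelope
assumed and the envelope flown, which is exactly the re-evaluation that
management skipped.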