From: "Ian St. John"
Subject: Re: Fortran or Ada?
Date: 1998/10/07
Message-ID: <361bae57.0@news.passport.ca>
References: <3618dc33.0@news.passport.ca> <6vcj6f$ak7$1@ys.ifremer.fr>
Organization: Passport.Ca
Newsgroups: comp.lang.fortran,comp.lang.ada

Michel OLAGNON wrote in message <6vcj6f$ak7$1@ys.ifremer.fr>...
>>>The designers failed, IMHO, to note that even if hardware might
>>>be more likely to be wrong than software at time T0, over the whole
>>>expected service life of the system, it was software that had the highest
>>>probability to end up wrong.
>>
>>IMHO, well tested software doesn't fail.
>
>But, IMHO, such well tested software doesn't exist.

By that light, 'well tested hardware' doesn't exist either. Maybe you aren't paying enough attention to hiring good software engineers. You seem rather 'biased'.

Modern systems require attention to both hardware and software engineering. Each piece of hardware has the potential for 'infant mortality', or tolerance sloppiness. This is exacerbated in complex systems because the system reliability is the combination of many individual MTBF values. On the other hand, software in a system that has not changed gains reliability over time, as bugs are shaken out. This is 'well tested' software. *Planning* for errors should assume hardware problems in general, for this reason.
However, good software engineering practice requires that software moved to a different system go back to the 'untested' level for re-evaluation. The software for Ariane 5 was 'untested'.

>The point, IMHO, is that the software was *useless* for Ariane 5, recognized
>so by the reviewers, and yet kept because of ``commonality reasons'', which,
>IMHO again, is a polite way to say ``lack of thought''. Although I could not
>make it out again clearly from the report, I remember that the launch
>procedure was changed at some time for Ariane 4, and that the software
>was also *useless* for it, but was kept for a similar reason: If it
>ain't broken, why change it ?

Actually, I would think it is just a 'cover' for the 'software re-use' theory. Like CASE tools, and dozens of other schemes before it, the point is to allow for quick software development with no intelligent thought. Like putting a Delta III upper stage on a Saturn 1B stage, with a couple of SRBs on the side. After all, each component is 'well tested'. Right? I would diffidently point out the mass of scrap metal over there as a good reason to change it.

You are correct that a change in the system *has* to be cause for re-evaluation of the software. But you are wrong in ascribing this to 'poorly tested software'. It would be like crying because the F15 flight simulator software doesn't work very well running your car. I would no more expect Ariane 4 software to be 'well tested' in an Ariane 5 than in a Saturn V, or even in a modified Ariane 4. Any more than I would expect a hardware engineer to increase tankage size by ten percent without evaluating the effect on structural integrity, resizing of the engines, fuel flow limits, etc.

>This is pure speculation. It might have been a software error or a hardware
>error, no one can tell.

There are few software errors that can put the wrong data into a variable.
They generally end up being weeded out in early testing because they tend to be 'catastrophic failures'. I call this a hardware error because, as I understand it, the sensor/converter on the Ariane 4 could not generate a valid value large enough to overflow the variable in its flight regime. Ariane 5, OTOH, was guaranteed to do so. Hardware failures *are* more likely under heat and stress. And it was a gamble to leave the software running, rather than have it disabled after launch. A valid decision, for the most part. Running the same software with Ariane 5 and no re-evaluation was just plain stupid. There is little defence against real idiocy.

>But even if it had been a hardware error, my experience
>is that it would be very likely to have happened *after* T-5 seconds rather
>than before (hardware errors happen with vibrations, heat, ...), i.e. at a
>time when the computations were no longer needed, that is, IMHO, when the
>software error of making useless computations had already happened.

I will give you this. The software was not needed once liftoff had been achieved. It was retained because it was felt that it could do no harm, and it would be cheaper to leave it in, and running. This was bad software engineering, primarily driven by costs, so it was a judgement call. Point is, it was a good call for Ariane 4. Maybe that wreckage will make the bean counters rethink their priorities.

They violated my rule: never cut corners on the prototype. Once you have all of the factors in a working system, then you can start reducing costs, with good data on *what* you can trim.