From: Ken Garlington
Subject: Re: Trust but verify (was Re: Papers on the Ariane-5 crash and Design by Contract)
Date: 1997/03/27
Message-ID: <333AB895.1459@lmtas.lmco.com>
References: <332B5495.167EB0E7@eiffel.com> <5giu3p$beb$1@news.irisa.fr>
 <332ED8AB.21E7@lmtas.lmco.com> <5go8nk$ngf$1@news.irisa.fr>
 <3332C049.2A0F@lmtas.lmco.com> <5gue93$2md$3@news.irisa.fr>
 <33383A6F.3A5A@lmtas.lmco.com> <5ha171$dck@flood.weeg.uiowa.edu>
 <3338AA0A.7A79CB24@eiffel.com>
Organization: Lockheed Martin Tactical Aircraft Systems
Newsgroups: comp.lang.eiffel,comp.object,comp.software-eng,comp.lang.ada

Bertrand Meyer wrote:
> 
> Robert S. White wrote:
> 
> >   So don't think it is just him that has been put off by people
> > saying that "Design By Contract with the Eiffel language"
> > would have (alone - without simulation or HWIL testing)
> > prevented the [Ariane] 5 disaster.
> 
> In this particular case there is good reason to think that good
> use of Design by Contract would in fact have evidenced the bug
> early in the development cycle, long before any simulation or
> testing. I think anyone that has practiced the method will agree.
> To repeat the basic point of our (Jezequel's and mine) paper:
> do not reuse without a contract.
> 
> But of course this is not a reason to say there should not be
> simulation, testing, inspections, independent reviews etc. etc.
> I don't recall anyone even coming close to suggesting this in the
> present thread. And our paper certainly does not say anything of
> the sort.

Unfortunately, it does. To quote:

"Does this mean that the crash would automatically have been avoided
had the mission used a language and method supporting built-in
assertions and Design by Contract? Although it is always risky to
draw such after-the-fact conclusions, the answer is probably yes:"

The flaw in this statement is essentially two-fold: what is said in
support of this argument is shaky, and, just as importantly, what is
NOT said in the paper is flawed.

Taking the supporting arguments first:

1. "Assertions (preconditions and postconditions in particular) can be
automatically turned on during testing, through a simple compiler
option. The error might have been caught then."

Catching this error via automated testing, in the case of Ariane 5,
depends upon two key aspects:

  a. That the proper data is available to trigger the exception during
     test. This was not available in the case of the Ariane 5 (see the
     final report).

  b. That the test would have been executed in the proper manner. The
     final report states quite clearly that proper testing was not
     performed.

Thus, *in the context of the Ariane-5 development*, this argument
fails.
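Before going on, and for readers who have not used Eiffel, here is a
rough sketch of the kind of contract the paper has in mind for the
64-bit to 16-bit horizontal bias conversion. The class name, feature
names and limits are my own invention for illustration - they are not
taken from the paper or from the actual SRI code - and the syntax is
approximately that of ISE Eiffel 4:

   class SRI_CONVERSIONS

   feature -- Conversion

      Maximum_bias: DOUBLE is 32767.0
            -- Largest horizontal bias magnitude that fits the 16-bit
            -- field (illustrative value only)

      horizontal_bias (bh: DOUBLE): INTEGER is
            -- 16-bit representation of horizontal bias `bh'
         require
            bias_representable: bh.abs <= Maximum_bias
         do
            Result := bh.truncated_to_integer
         ensure
            within_16_bits: Result >= -32768 and Result <= 32767
         end

   end -- class SRI_CONVERSIONS

The claim in point 1 is that, with assertion monitoring enabled at
compile time, any test execution reaching such a routine with an
out-of-range bias would stop with a precondition violation. My points
a and b above concern whether such a test execution would ever have
occurred.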
2. "Assertions can remain turned on during execution, triggering an
exception if violated. Given the performance constraints on such a
mission, however, this would probably not have been the case."

There are many counter-arguments to this point. However, since you
acknowledge explicitly that it is inapplicable to this case, it fails
inherently.

3. "But most importantly the assertions are a prime component of the
software and its documentation ("short form", produced automatically
by tools). In an environment such as that of Ariane where there is so
much emphasis on quality control and thorough validation of
everything, they would be the QA team's primary focus of attention.
Any team worth its salt would have checked systematically that every
call satisfies the precondition. That would have immediately revealed
that the Ariane 5 calling software did not meet the expectation of
the Ariane 4 routines that it called."

  a. This implies that the QA team would have had proper information
     to do such checking. As noted above, the final report clearly
     states that they would not have had such information. The most
     thorough manual validation will fail in the face of incomplete
     data. This alone invalidates this claim.

  b. It also implies that the QA team would have done the check.
     Putting aside the argument as to whether the Ariane 5 team was
     "worth its salt", the report makes it clear that there was NOT
     an emphasis on "thorough validation of everything." Therefore,
     the assumption made in this point is in error, further
     invalidating the claim.

  c. There is, of course, the issue of scalability. Can a QA team
     check manually that every call satisfies every assertion
     (remember that automated testing is described in a separate
     argument), for a system of this size? No evidence is given to
     support this claim. This further weakens an already dead claim.
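To make the obligation in point 3 concrete: the "short form" is the
view of a class that keeps only the feature headers, header comments
and contracts, and omits the implementations. For the illustrative
class sketched above it would read roughly as follows (again, the
names are my own, not from the SRI software):

   class interface SRI_CONVERSIONS

   feature -- Conversion

      horizontal_bias (bh: DOUBLE): INTEGER
            -- 16-bit representation of horizontal bias `bh'
         require
            bias_representable: bh.abs <= Maximum_bias

   end -- class interface SRI_CONVERSIONS

"Checking systematically that every call satisfies the precondition"
then means demonstrating, for every Ariane 5 call of such a routine,
that the argument passed can never exceed the stated limit. My points
a through c above are about whether the QA team would have had the
data, the management emphasis, and the manpower to carry out that
demonstration.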
There is also a general argument for all of these points. Given that
Design by Contract/Eiffel makes it *possible* to specify the
assertion, would the development team have, in fact, specified the
assertion? The final report clearly indicates that the team felt the
assertion could not be violated in practice, and that this analysis
was agreed to at several management levels. No evidence is given
that, in this context, the assertion would have been written by
*this* team. I suspect most development teams do not introduce
assertions they consider impossible to trigger! As the paper itself
notes:

"If you have proved that a condition cannot happen, you are entitled
not to check for it. If every program checked for all possible and
impossible events, no useful instruction would ever get executed!"

By extension, no useful program would ever get *coded*, even if the
extraneous assertions would be suppressed at run-time, if some
decisions were not made about what assertions to include.

(There are several other general arguments related to the use of
assertions, of course, but advancing such arguments seems unnecessary
at this point.)

So much for the explicit arguments. Now, for what the paper does NOT
say. You state that "...there should... be simulation, testing,
inspections, independent reviews etc...." and go on to note that the
paper does not *deny* this fact. This, however, is not the issue. The
issue is: does Design by Contract *require* such activities, which
everyone seems to agree are in fact required?

Note, again, that the statement is made that:

"Does this mean that the crash would automatically have been avoided
had the mission used a language and method supporting built-in
assertions and Design by Contract? Although it is always risky to
draw such after-the-fact conclusions, the answer is probably yes:"

This says that the crash would probably have been avoided if Design
by Contract/Eiffel (DBC) had been used. It does *not* say that the
crash would have been avoided if Design by Contract/Eiffel had been
used, AND IN ADDITION:

  a. That the proper data had been provided in the *system*
     specification. Although this is extensively described in the
     final report, the DBC paper makes no mention of it. As noted
     above, without such data, all arguments in support of DBC fail.

  b. That DBC would have caused the introduction of the critical
     assertion. As quoted above, the paper in fact implies the
     opposite: that "impossible" assertions may not in fact be
     introduced.

  c. That sufficient manual QA would have been done. The paper does
     say that proper manual QA should be done. However, it does not
     say how DBC supports this activity, either in terms of educating
     management as to the need for this activity (a key issue in the
     Ariane 5 case), or in terms of proper performance of this
     activity. (Apparently, all that is needed is "salt" :)

  d. That sufficient automated verification would have been done.
     Here is the paper's statement on testing:

     "Is it a testing error? Not really... if one can test more one
     cannot test all. Testing, we all know, can show the presence of
     errors, not their absence. And the only fully 'realistic' test
     is to launch..."

     This, to me, is not a particularly strong endorsement of testing!

So, clearly, DBC would not have been *sufficient* in this case. You
emphasized this point later in your post (although, unfortunately,
not in the paper):

> Engineering reliability can only result from the combination of
> several techniques approaching the product from different angles:
> some a priori (building the product right in the first place),
> some a posteriori (not believing that it is right, and testing it
> extensively). That this is not an either-or situation
> is so obvious as to be almost a platitude (although one does
> encounter people, including software engineering authors,
> who believe in only the a priori part or only the
> a posteriori part).

So, if there are people who do not consider this a "platitude" (as
was obviously true for Ariane 5), and such considerations are
essential to the use of DBC, why does the paper not state this
explicitly?

So, the only argument remaining is: is DBC *necessary* for the Ariane
5 case? I would say no. If the proper flight trajectory had been
provided, and the proper test had been performed, then the SRI error
would have been detected before launch. This is noted in the final
report. Note that this is not an issue of 20/20 hindsight; as several
people familiar with such systems have noted, such a test is standard
practice. One could argue that the proper use of assertions would
have made it easier to isolate and correct the problem, or that some
other system fault might have been detected through the use of DBC,
but both arguments are irrelevant to the subject of the paper: that
DBC would have saved the Ariane 5 from the *specific* fault which
destroyed it.
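For what it is worth, the kind of test I have in mind is nothing
exotic and needs no contracts at all. A minimal sketch, again with
invented names and made-up sample values (the real test would drive
the complete SRI software from the trajectory simulation, not a
single routine):

   class TRAJECTORY_REPLAY

   creation
      make

   feature

      make is
            -- Step through horizontal-bias predictions for the expected
            -- Ariane 5 trajectory and flag any value that will not fit
            -- the 16-bit field.
         local
            t: INTEGER
            bh: DOUBLE
         do
            from t := 0 until t > 40 loop
               bh := bias_at (t)
               if bh < -32768.0 or bh > 32767.0 then
                  io.put_string ("Horizontal bias overflow at T+")
                  io.put_integer (t)
                  io.new_line
               end
               t := t + 1
            end
         end

      bias_at (t: INTEGER): DOUBLE is
            -- Horizontal bias predicted at `t' seconds after lift-off.
            -- A made-up ramp; the real figures would come from the
            -- Ariane 5 trajectory data.
         do
            Result := 900.0 * t
         end

   end -- class TRAJECTORY_REPLAY

Given the correct trajectory data, a replay of this kind exposes the
overflow whether or not the conversion routine carries a require
clause, which is why I regard the trajectory data and the test, not
the contract, as the necessary ingredients.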
So, if the paper had emphasized that proper system engineering and
verification testing also had to be done, it would have further
weakened its own case. It is not surprising, then, that no such
statement is in the paper.

> It's been amazing in this discussion how some postings have taken
> to task our paper for ideas that are not there.
> The points that it makes are pretty simple, and there for everyone
> to examine (http://www.eiffel.com, link to "Ariane 5 paper");
> after reading most of the discussion, and re-reading the paper,
> I think that its arguments and conclusion are perfectly in line with
> many useful comments made by people who thought they disagreed with it.
> Please check what it really has to say.

I would appreciate a response to any or all of my rebuttals to your
paper. I have tried in good conscience to quote it accurately. If you
believe I have misquoted you, please say so. Unfortunately,
continuing to claim that the paper answers these criticisms, without
providing any details as to *how* they are answered, does not provide
me with any confidence that your paper is valid.

Perhaps your paper is not directed at practitioners in the flight
controls or inertial reference fields. If so, then I could understand
why you would ignore the arguments made by those practitioners, such
as myself and Mr. White. In any case, I have posted these arguments
several times without response. If you do not respond to this post, I
will assume that no response is possible.

> --
> Bertrand Meyer, President, ISE Inc., Santa Barbara (California)
> 805-685-1006, fax 805-685-6869, -
> ftp://ftp.eiffel.com
> Visit our Web page: http://www.eiffel.com
> (including instructions to download Eiffel 4 for Windows)

-- 
LMTAS - The Fighter Enterprise - "Our Brand Means Quality"
For job listings, other info: http://www.lmtas.com or http://www.lmco.com