From: Ken Garlington
Subject: Re: Critique of Ariane 5 paper (finally!)
Date: 1997/08/19
Message-ID: <33FA748A.35FE@flash.net>
References: <33E503B3.3278@flash.net> <33E8FC54.41C67EA6@eiffel.com> <33E9B217.39DA@flash.net> <33EA5592.5855@flash.net> <33EB4935.167EB0E7@eiffel.com> <33EB754E.446B9B3D@eiffel.com> <33EBE46D.2149@flash.net> <33EF9487.41C67EA6@eiffel.com> <33F22B91.167EB0E7@eiffel.com> <33F7C3C0.446B9B3D@eiffel.com>
Organization: Flashnet Communications, http://www.flash.net
Reply-To: Ken.Garlington@computer.org
Newsgroups: comp.lang.ada,comp.lang.eiffel

Bertrand Meyer wrote:
> 
> Robert Dewar writes:
> 
> > This is demonstrably false. There are lots of examples of highly
> > reliable software written by people who don't even know what a
> > specification is, let alone how to carefully associate them with
> > software elements.
> >
> > If you want details on this, I can send you hundreds of thousands
> > of lines of COBOL code. This code is completely impenetrable in
> > places, and I consider it pretty horrible, but it is from a
> > completely reliable system, where reliability is measured in the
> > terms that matter, i.e. it does what it is supposed to do in a
> > highly reliable manner.
> 
> This is eloquently said, but incorrect all the same.
> 
> The definition of reliability which this implies is that a system
> is "highly reliable" if it has been working satisfactorily for,
> say, 30 {seconds | minutes | hours | days | weeks | months | years}
> -- pick one.
> This is one possible definition of reliability, which gets more and
> more interesting as it moves to the right of the list of choices;
> but it is by no means the only "terms that matter".
> 
> True, it may be pragmatically sufficient in some cases. For example,
> if you tell me my car insurance is being computed by a program that
> has worked "satisfactorily" for 10 years, I may be reasonably
> pacified. After all, even though I know that processing my case may
> hit a bug that no one encountered before, the (damage * likelihood)
> factor is small enough for me to accept the risk (although, being a
> computing scientist, I might ask for evidence of what exactly you
> mean by "satisfactorily" or even, using Prof. Dewar's terms,
> "highly reliably").
> 
> But for a mission-critical system of the kind that started this
> discussion -- one, such as Ariane, whose non-high-reliability may
> imply loss of life, or destruction of huge amounts of property, or
> some other catastrophic event -- that is not good enough. "The
> software has passed the acceptance tests" or even "the software has
> worked properly in the first two missions" is interesting, but not
> sufficient to allay the concerns of, e.g., regulatory agencies. I
> have seen requirement specifications for mission-critical systems
> which include formal conditions such as "at most one failure for
> each 10^n passengers", where n is a very large number --
> corresponding, say, to 500-year continuous operation. Leaving aside
> the question of whether we can honestly make such promises (since
> the scientific debate should not prevent us from trying our best as
> engineers), we are NOT going to reassure, let alone satisfy, our
> censors through pragmatic reliability arguments of the form "it has
> worked well so far, ergo it is reliable". They will listen with
> interest but they will demand more.
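As a rough illustration of the scale such a requirement implies (the
arithmetic below is mine; the 500-year figure is the only number taken
from the paragraph above, and the resulting rate is back-of-the-envelope,
not a quoted certification target):

```python
# Illustrative arithmetic only: what "at most one failure" over
# 500 years of continuous operation implies as a per-hour failure
# rate. The 500-year figure comes from the quoted text; everything
# else here is a hypothetical worked example.

HOURS_PER_YEAR = 365.25 * 24          # ~8,766 hours per year
years = 500
total_hours = years * HOURS_PER_YEAR  # ~4.38 million operating hours

# "At most one failure" over that span bounds the mean failure rate:
max_failures_per_hour = 1.0 / total_hours

print(f"{total_hours:,.0f} operating hours")
print(f"max mean failure rate: {max_failures_per_hour:.2e} per hour")
```

No amount of "it has worked so far" testing can directly demonstrate a
rate that small, which is the point of the quoted paragraph.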
> 
> Part of what they will demand will be guarantees about our software
> process -- something in the line of ISO 900x, the Capability
> Maturity Model, quality assurance plans and the like. But they will
> also want to know what software techniques we use and, in
> particular, why we believe that each software component does its
> job -- especially a reusable software component pulled out from
> somewhere else. This is where there is simply no substitute for
> Design by Contract: a necessary condition, as I wrote.

I have _safety-critical_ software systems in operation today that
were developed with a process that has been independently evaluated
to both ISO 9001 and SEI Level III. The products have been validated
using MIL-STD-882B and DOD-STD-2167, and also meet all validation
requirements for RTCA/DO-178B (although no customer has asked for
this yet). No agency responsible for acceptance of my products has
ever raised the issue of Design by Contract in the _specific_ sense
used in the Ariane paper. I am also familiar with systems accepted
under MoD DEF STANs (the C-130, in particular), and they did not
require Design by Contract (although SPARK was used on that system).

Can you identify a regulatory software safety standard that
specifically calls out Design by Contract? If so, please provide the
citation; I would be interested in reviewing it.

> By no means sufficient (it makes sense as part of an array of
> reliability mechanisms, some technical, some organizational); but,
> until someone comes up with a better idea to guarantee reliability,
> necessary.

There are quantitative software reliability models; none, to my
knowledge, calls out Design by Contract as specifically described in
your Ariane paper.

> The recently deceased Harlan Mills, then of IBM, published in 1975
> (Proc. Intl. Conf. on Software Reliability; see reference in
> OOSC-2) a paper entitled
> 
>   "How to write correct programs, and know it."
> 
> This title truly captures the essence of the problem.
> It is not sufficient, as Prof. Dewar seems to imply, to write
> reliable programs (where reliability includes correctness) which we
> think are reliable because they have worked well enough for long
> enough. To convince others (and ourselves) that they are indeed
> reliable, we need better arguments. These arguments can only come
> from a precise description of what the elements are supposed to do
> -- the contracts -- and some evidence that the implementation does
> honor the contracts. Without these mechanisms in place, we cannot
> even discuss properly the reliability of our software, since we
> haven't even defined what we are talking about.

See the work of Leveson or Musa for alternative ways to make these
"arguments."

> 
> So when Robert Dewar writes
> 
> > "Associating specifications with software elements" is an
> > important tool that will aid in the production of reliable
> > software.
> >
> > Of course no one will disagree with that; what people might
> > disagree with is the huge leap to say
> >
> > "Therefore any system that does not use DBC is not reliable"
> 
> I believe he is wrong on the second point, at least if we replace
> "reliable", in the last line, by "demonstrably reliable" (using
> "demonstrably" not in the absolute mathematical sense but in the
> practical sense of being able to convince competent colleagues).

I can provide specific factual evidence to the contrary. Competent
colleagues can be convinced through alternative means.

> 
> My view here is, apparently at least, the exact reverse of his.
> There is no claim whatsoever that Design by Contract is
> "sufficient". I would be happy to know a magical recipe to
> guarantee software reliability, but I don't.
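For readers following from comp.lang.ada who have not seen the term:
the "contracts" discussed above can be sketched in any language with
assertions. The fragment below is my own minimal illustration, not
Meyer's code; real Design by Contract (as in Eiffel) additionally makes
the pre- and postconditions part of the routine's published, inherited
interface rather than buried assertions.

```python
# A minimal sketch of Design by Contract using plain assertions.
# Hypothetical example routine; illustration only.

def sqrt_approx(x: float) -> float:
    # Precondition: the obligation the CALLER must honor.
    assert x >= 0.0, "precondition violated: x must be non-negative"

    result = x ** 0.5

    # Postcondition: the obligation the IMPLEMENTATION must honor.
    assert abs(result * result - x) <= 1e-6 * max(1.0, x), \
        "postcondition violated: result squared must equal x"
    return result

print(sqrt_approx(16.0))   # prints 4.0
```

A call such as `sqrt_approx(-1.0)` fails the precondition immediately,
pinpointing the component at fault -- the kind of blame assignment the
Ariane paper argues for.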
> I do know, however, techniques that tremendously improve
> reliability (techniques that have saved me and many other people
> countless bugs, including some potentially spectacular ones), and
> without which it is not possible, in the state of software
> technology as I understand it, to achieve similar results. In
> short, necessary conditions.
> 
> I understand that not everyone is convinced, and that some may
> think the claims are too grandiose even though they are really
> very modest and down-to-earth; but that does not diminish the
> relevance of these techniques to software reliability, the
> importance for every software engineer of learning them, and, yes,
> their necessity.

Has this necessity been demonstrated on a safety-critical system in
production? In what regulatory context was DBC used to certify the
system?

> 
> --
> Bertrand Meyer, President, ISE Inc.
> ISE Building, 2nd floor, 270 Storke Road, Goleta CA 93117
> 805-685-1006, fax 805-685-6869,
> http://www.eiffel.com, with instructions for free download
> == ISE Eiffel 4: Eiffel from those who invented it ==