From: Ken Garlington
Subject: Re: Critique of Ariane 5 paper (finally!)
Date: 1997/08/29
Message-ID: <3406B608.7C9@flash.net>
References: <33E503B3.3278@flash.net> <33E8FC54.41C67EA6@eiffel.com>
 <33E9B217.39DA@flash.net> <33EA5592.5855@flash.net>
 <33EB4935.167EB0E7@eiffel.com> <33EB754E.446B9B3D@eiffel.com>
 <33EBE46D.2149@flash.net> <33EF9487.41C67EA6@eiffel.com>
 <33F22B91.167EB0E7@eiffel.com> <33F7C3C0.446B9B3D@eiffel.com>
 <33FA748A.35FE@flash.net> <33FBD62C.3DD3@invest.amp.com.au>
 <33FD8685.AAAE3B4F@stratasys.com> <33FE8732.4FBB@invest.amp.com.au>
 <33FFA324.4DB9@flash.net> <34013F3E.27D4@invest.amp.com.au>
Organization: Flashnet Communications, http://www.flash.net
Reply-To: Ken.Garlington@computer.org
Newsgroups: comp.lang.ada,comp.lang.eiffel

Thomas Beale wrote:
> 
> Ken Garlington wrote:
> 
> I've been caught out using a very informal argument and relying
> on "commonly agreed" norms of software engineering. But as you
> imply (and as I have found often enough), many of these can be
> wrong. So...
> 
> > > Jeff Kotula wrote:
> > > > 1. Any evaluation of a system's reliability must be based on its
> > > > requirements specification.
> >
> > False. The Musa model, for example, does not require examination
> > of a requirements specification.
> 
> I don't know what the Musa model is; does its definition of reliability
> of a system rely in no way on the requirements (or any transformation of
> any expression of it)?
> If not, does its idea of reliability not relate
> to any expression of the correctness of the system (of which the
> requirements spec is an instance)?

In a grossly oversimplified sense, it requires a set of tests performed
according to certain ground rules, the collection of faults detected over
time from those tests, and the application of a model to estimate the
residual faults. I suppose you could claim that the tests would be based
on the requirements specification, although they could just as easily be
generated by the end user without reference to that specification. It
certainly relates to an expression of the "correctness" of the system,
but I never claimed otherwise.

The key of Musa (and several other models) is to execute the system over
time based on its predicted operational profile (how it is expected to
be used). This is not necessarily the same as what's in the requirements
spec, which may be written more in terms of how it _could_ be used.

There's a large amount of information available on quantitative software
reliability; I've been planning to get Lyu's "Handbook of Software
Reliability Engineering," which I've heard is good.

> > > > 2. Any evaluation of a system's reliability will directly
> > > > correlate with the reliability of its components.
> >
> > For software, not true. See "Safeware." The sum of the component
> > reliability does not necessarily equal the system's reliability.
> > (This is a common mistake made when attempting to apply hardware
> > reliability models to software.)
> 
> I would have to agree here. A better statement would have been:
> "A system's unreliability will strongly correlate to the unreliability
> of the components." I.e., you can build rubbish out of good bits, but
> it's unlikely (if not impossible) to build a good system out of
> broken components.

There's _some_ correlation; it depends upon the system design, etc. as
to the strength of the correlation for a given component.
In the extreme case, a completely broken component that's never executed
will not degrade system reliability. More to the point, components that,
individually, have high reliability may exhibit low reliability when
used together.

> > > > 3. Any evaluation of a component's reliability must be based on
> > > > its requirements specification.
> >
> > Again, not true. See above.
> 
> I would need to be convinced by this "Musa" argument...
> 
> > > which closes the loop from requirements to implementation. I think
> > > this argument is a pretty fair statement of why DBC is not just
> > > another tool in the toolkit: (to paraphrase the above) DBC is a way
> > > of formalising, implementing, and enforcing the requirements
> > > specification at the design and implementation levels of abstraction.
> >
> > Unfortunately, there can be elements of the requirements specification
> > which cannot be enforced at the implementation level (e.g. certain
> > classes of assumptions about the external environment).
> 
> Naturally... probably the majority (measured by pages of document or
> whatever) is informal or semi-formal, and does not map directly to
> the various levels of design/implementation. But on the other hand,
> if contracts were used as a formalism in the reqts spec, then the
> ability to map through would be better.

Or (and IMHO, this is the more likely outcome) the informal requirements
are merely pushed up a level, into a system document for example. There
can certainly be exceptions; a spec for math routines can be formalized,
for example, but a general-purpose system involving a human in the
specification will likely, at some point, require a natural-language
description of the expectations.

> > > Where I think there is more room for debate is in how DBC should be
> > > implemented. I have used Eiffel's DBC for some time, and it is miles
> > > ahead of no DBC.
> >
> > What did you use before DBC?
> 
> At university (Qld, Australia), we used formal assertions (a la
> Dijkstra) to specify the meaning of program components. There
> were no tools to support this, but we could easily code up test
> cases to test correctness by encoding these statements (in
> "pascal plus").
> 
> I spent about 6 years in distributed real-time SCADA systems
> coded in K&R C, and at the end, had come to the conclusion (along
> with other team members and developers) that modules built using
> PRE_CONDITION, POST_CONDITION, and CHECK (macro) statements
> in the code were better because:
> 
> a) they were more readable & thus maintainable;
> 
> b) they produced much more useful man pages (we built automated
>    back-extraction tools) - developers could avoid defensive
>    programming & complain to the authors if their program
>    died in a way forbidden by the assertion statements;
> 
> c) they were _much_ easier to debug (we used the Fred Fish dbug
>    macros as well, and integrated them with the assertions);
> 
> d) they acted as a repository for design-level decisions, which
>    began to replace voluminous design documentation which was
>    immediately out of date;
> 
> e) the incentive was there to write more generic, re-usable
>    modules, since for the first time, client developers had
>    a trustworthy way of knowing what a given module did,
>    without having to look at the code: the (automatically
>    extracted) man page with both function signatures (like
>    you get if you do "man strcat" for example, on a unix
>    system), and the pre- and post-condition statements...
>    this was treated as the "truth". A new experience -
>    trust in software not written by me - was discovered.
> 
> Although we never had the means or time to make comparative
> studies of code done this way versus the old way, I believe
> we were able to write much better software, more quickly, and
> that it was more reliable because a) our ability to test it
> was much better, and b) it tended to get re-used more often
> (and hence further tested).
> 
> I should mention that we weren't doing this for amusement
> either: these systems control e.g. gas pipelines, electric
> power transmission and distribution systems (e.g. Sydney...),
> and passenger rail systems. Reliability was the most
> important thing in the world.
> 
> As well as the earlier Dijkstra/Hoare etc. work, Meyer's
> original OOSC was a major influence on this work, since it
> encouraged us to think it would be possible to do this
> kind of thing in actual software (not hidden away in
> some document), and it showed us what reasonable syntax
> might look like for object-oriented software.
> 
> Since then I have felt that formalisms that don't support
> these ideas are way behind those that do. Eiffel is the
> only out-of-the-box formalism in this category that is
> available for building object-oriented software, but other
> formalisms (even the augmented-C "formalism" I describe
> above) have great merit. It just depends on how much pain
> you want to go through to make reliable software.
> 
> I have since used Eiffel for some years because
> - I am allergic to pain,
> - it does the rest of the OO paradigm so well.
> 
> - thomas beale