From: Ken Garlington <kennieg@flash.net>
Subject: Re: The stupidity of all the Ariane 5 analysts.
Date: 1997/07/24
Newsgroups: comp.software-eng,comp.lang.ada,comp.lang.eiffel
Organization: Flashnet Communications, http://www.flash.net

Robert Dewar wrote:
>
> Robert White said
>
> << One could argue that the above is just a nit (flawed example) and
> the overall premise of DBC/Eiffel as a "silver bullet" panacea versus
> conventional "aerospace industry software engineering process" is
> the real issue. >>
>
> Unfortunately the Ariane crash was the result of just such a nit!
>
> I would think experience would teach us to be very suspicious of any
> claims by anyone to have a silver bullet panacea when it comes to
> software engineering for large systems.
>
> However, tools and techniques that help, even a little bit, are welcome
> even if they are not a panacea (I consider Ada to be in this category,
> and I always find it annoying when people advertise Ada as being
> a silver bullet panacea).
I agree with this, but I also think every tool and technique has both advantages _and risks_. Failing to understand the risks, and where they are (and aren't) important, can lead to extremely dangerous choices made with the best of intentions. I put executable assertions (Ada, Eiffel, etc.) in this category. In some environments, they are clear wins. In others, I think it is at least debatable whether their advantages outweigh their risks.

Complicating this is that many people confuse the abstract Good Thing (e.g. well-documented assumptions) with a specific implementation of the Good Thing. They consider the Good Thing so obvious that they don't consider (a) what Bad Things may also come with the _implementation_ -- the proponents of the implementation aren't always candid about this part, or are sometimes just blind to it -- and (b) whether, for a given project, there are better ways to achieve the Good Thing.

As you've said in the past, "Computer Science" is in part a misnomer, in that there doesn't seem to be much in the way of controlled experiments. So a lot of these risk/reward analyses are based on incomplete data, at best. That doesn't mean they shouldn't be done, however. Ignoring the risks (or worse, dismissing them as "they just haven't seen the light yet," as Mr. Meyer did recently) is not smart in any environment, and certainly not in safety-critical systems.