From mboxrd@z Thu Jan 1 00:00:00 1970
From: rkaiser@dimensional.com (Richard Kaiser)
Subject: Re: Papers on the Ariane-5 crash and Design by Contract
Date: 1997/03/19
Message-ID: <5govbe$l3$2@quasar.dimensional.com>
Distribution: world
References: <332B5495.167EB0E7@eiffel.com> <332D113B.4A64@calfp.co.uk> <5gl1f5$a26$2@quasar.dimensional.com> <332E8D5D.400F@calfp.co.uk> <5gnttg$jkc$1@quasar.dimensional.com> <5goe44$ngf$2@news.irisa.fr>
Organization: Dimensional Communications
Newsgroups: comp.lang.eiffel,comp.object,comp.software-eng,comp.programming.threads,comp.lang.ada
Date: 1997-03-19T00:00:00+00:00
List-Id:

In article <5goe44$ngf$2@news.irisa.fr>, jezequel@irisa.fr (Jean-Marc
Jezequel) wrote:
>
>In article <5gnttg$jkc$1@quasar.dimensional.com>, rkaiser@dimensional.com
>(Richard Kaiser) writes:
>
>>But this software was never tested. There was no end-to-end test with the
>>hardware, nor even a software-only simulation. If the testing is not
>>performed, assertions will not be executed to detect a problem. With
>>proper testing, the simulated flight would have failed with or without
>>the assertions.
>
>Yes. But in some cases, actual testing can be extremely hard to perform.
>If you do not want to test during a rocket launch, you have to simulate
>the supposed environment of this rocket, aka a software-simulated launch.
>Enabling this is possible, but you have to design your software *with this
>purpose in mind*. Most notably, it means decoupling your software from
>your hardware, and simulating the expected behavior of the latter. Still,
>you're never 100% sure that you didn't miss something in your simulation
>of the environment.
>
>Because the Ariane 4 SRI was designed as a *functional* black box, that
>is, vertically integrated, this approach was simply not possible, unless
>you re-designed everything instead of re-using the black box. You cannot
>fool an inertial reference system, e.g. to make it believe it follows a
>given trajectory, unless you make it follow this trajectory, i.e. you
>launch it!

There are TWO ways to do this kind of testing:

1. Replace the gyros with simulation-computer-driven D/A converters.

2. Place the navigation unit on a three- or four-axis table and load the
acceleration numbers via a test message or a special test port.

Where I work we use both methods. We can run full end-to-end testing with
either the flight computer and navigation unit alone, or with a full-up
equipment section. With this testing we have gone from "it should work" to
"we have proven most of it works" (aerodynamic models require actual
flights to get them exact).

>So our point in the paper is that yes, better testing would have caught
>the problem. But don't throw stones at the testers. It's not always as
>easy as it seems. OTOH, assertions always help software component
>verification during reviews. This is where the problem should have been
>detected, provided it was made explicit enough through design by contract.

The stones are going to management and to system engineering. With the
cost of launch vehicles and payloads (plus ground damage, such as the $50
for the Delta failure), how can you not perform end-to-end testing? The
system group was responsible for flowing down requirements and for
verifying these requirements were met.
Management assumed the navigation boxes were "interchangeable hardware"
black boxes, when they are actually integrated hardware and software units.

>>Why is Eiffel saying assertions are a new tool? C (and now C++) have
>>been using #include <assert.h> for years. Software engineers have been
>>using assert macros to verify program limits are not exceeded.

With Eiffel the limits analysis can be more automated, and easier for
programmers rather than software engineers to use. But assertions are not
new.

>For more details on that, you can check my book:
>"Object-Oriented Software Engineering with Eiffel"
>Addison-Wesley Eiffel in Practice Series, ISBN 0-201-63381-7
>http://www.irisa.fr/pampa/EPEE/book.html
>