From: "Roger T."
Subject: Re: Please do not start a language war (was Re: Papers on the Ariane-5 crash and Design by Contract)
Date: 1997/03/20
Message-ID: <01bc356c$e3aae860$371883cc@beast.advancedsw.com>
References: <332B5495.167EB0E7@eiffel.com> <5giu3p$beb$1@news.irisa.fr> <332ED8AB.21E7@lmtas.lmco.com> <199703190839.JAA02652@stormbringer.irisa.fr> <33302A36.7434@lmtas.lmco.com>
Organization: Advanced Software Technologies
Newsgroups: comp.lang.eiffel,comp.object,comp.software-eng,comp.lang.ada

In general I agree wholeheartedly with Ken's comments, but I thought I would
add some of my own.

Ken Garlington wrote in article

> Jean-Marc Jezequel wrote:
> > In article <332ED8AB.21E7@lmtas.lmco.com>, Ken Garlington wrote:
> > >
> > > 2. No one ran a full integration test with realistic flight data,
> > > which would have alerted them to the mistake made in #1. Particularly
> > > for a distributed mission-critical system, this should be considered
> > > an absolute requirement.
> >
> > Yes, this is true. But you have to understand that the mere design of
> > the SRI made it very difficult to test it in any other way than by
> > performing a launch.

The above statement is patently untrue and reflects a shocking ignorance of
the hardware-in-the-loop simulation practices that have been standard in
aerospace systems development for decades. The idea of launching a vehicle
*WITH PAYLOAD* and calling that flight a test... I am *speechless* at the
idea!

> > This is because of the tight integration of hard-to-fool hardware with
> > software in a black-box functional unit.

Bull. *All* modern launch systems, on-orbit systems, aeronautical systems,
nautical systems, and missile systems with embedded real-time control
software should be tested extensively, in both all-digital simulation and
hardware-in-the-loop (HWIL) simulation, against *every* mathematically
conceivable set of operational parameters.

Hardware is in fact *not* hard to fool. I have personally developed
extensive 6DOF HWIL test facilities that used rate tables, 3DOF tables,
wind tunnels, environmental chambers, and so on to test missile systems,
launch systems, and robotic vehicles in realistic environments. If you
develop an intelligent Monte Carlo test program covering the entire range
of operational parameters, you will catch simple flaws like this one.
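To make that concrete, here is a toy sketch (in Python) of the kind of
Monte Carlo driver I mean. Every envelope number, scale factor, and name in
it is invented for illustration; nothing is taken from a real program:

    import random

    # Hypothetical operating envelope for the vehicle under test
    # (made-up numbers, not Ariane data).
    ENVELOPE = {
        "horizontal_velocity": (0.0, 1200.0),  # m/s
        "vertical_velocity": (0.0, 3500.0),    # m/s
        "pitch_rate": (-5.0, 5.0),             # deg/s
    }

    def draw_profile():
        # One random operating point drawn across the full envelope.
        return dict((name, random.uniform(lo, hi))
                    for name, (lo, hi) in ENVELOPE.items())

    def unit_under_test(profile):
        # Stand-in for the simulated guidance unit: report the
        # intermediate values it computes, e.g. a velocity scaled
        # into a 16-bit register.
        scaled = int(profile["horizontal_velocity"] * 32.0)  # invented scaling
        return {"horizontal_bias_16bit": scaled}

    def out_of_range(outputs):
        # Flag any value that cannot fit its declared 16-bit representation.
        return [(n, v) for n, v in outputs.items()
                if n.endswith("_16bit") and not -32768 <= v <= 32767]

    for trial in range(100000):
        failures = out_of_range(unit_under_test(draw_profile()))
        if failures:
            print("overflow on trial", trial, ":", failures)
            break

The point is coverage: you draw across the *whole* envelope, not just the
handful of profiles somebody expects to fly.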
> We have exactly the same coupling of inertials to flight controls on a
> current project, and we are able to test the coupled system in a
> black-box environment in our labs, with pilots in the loop, performing
> the same flight profiles we expect to see in operation.

Absolutely. This is standard procedure.

> Interestingly enough, the IRS for this project is also "reused" from a
> different platform, and there was a proposal to minimize testing of the
> system because it had already been "proven." We were able to quash that
> "cost-savings" measure before it gained much support. However, in an
> environment with Cost As an Independent Variable (CAIV), I can certainly
> see how easy it would be to adopt such ideas.

It is possible to do targeted testing that takes credit for previous
testing, but only if the systems analysis shows that the assumptions used
to minimize the testing are valid.

> > What can be your test strategy for a black box containing an inertial
> > central?

Real-time HWIL simulation testing on rate tables and 3DOF tables. The
output data from those tests would have immediately shown a trajectory
failure when the problematic profiles were simulated.

> > If the software had been designed with less coupling to this particular
> > hardware, you could have tested the software against a simulation of
> > the hardware and its environment.

Test both together in full operation.

> > Here, the launch was the test.

Brain damage.

> Is this what Aerospatiale told you? ;-)

> > I don't see where it denigrates sound testing. IMHO, sound testing *is*
> > needed. And testing is effective only if you have a sound test
> > strategy, even more so when you use programming by contract. In the
> > paper, we just recall that you cannot rely on tests alone.

But in this case the flaw would have *IMMEDIATELY* jumped out at any
qualified engineer performing proper HWIL testing.

> Although I think programming by contract is worthwhile, I still contend
> that there is no compelling reason to believe it would have avoided
> *this particular problem*, for two reasons:
>
> 1. There is strong evidence to believe that the Ariane IV SRI team did
> not see this as a particularly important assertion to make (otherwise,
> they would have coded it).

Also, even if assertions are included, the list of assertions can quietly
become incomplete when the module is used in another context. It all comes
back to incomplete domain analysis.
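To illustrate how an assertion list goes stale, here is a rough Python
sketch. The bounds and scale factor are made up; this is not the actual
Ariane code:

    # Contract written against the ORIGINAL vehicle's flight envelope.
    MAX_HORIZ_VELOCITY = 300.0  # m/s, from vehicle A's domain analysis (invented)

    def convert_horizontal_bias(velocity_ms):
        # Precondition: this bound encodes vehicle A's domain analysis.
        # Reused on vehicle B, nothing here re-derives the bound from the
        # new trajectory, so the assertion list is now incomplete.
        assert 0.0 <= velocity_ms <= MAX_HORIZ_VELOCITY, "outside analyzed envelope"
        scaled = int(velocity_ms * 100.0)    # invented scaling factor
        assert -32768 <= scaled <= 32767     # must fit a 16-bit register
        return scaled

    convert_horizontal_bias(290.0)  # every vehicle A profile passes
    convert_horizontal_bias(510.0)  # a vehicle B profile trips the contract,
                                    # but only if someone actually runs such
                                    # a profile before launch

The contract fires at run time, which is exactly the point: without
realistic flight profiles in the test program, nobody ever exercises it.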
> 2. There is some evidence to indicate that the Ariane V team would not
> have considered themselves in violation of the assertion, even if it had
> been explicitly coded. Based on my experience in similar situations, it
> is sometimes difficult to translate such high-level information as a
> flight profile into the specific range of parameters to be passed to a
> particular unit.

Absolutely right. Which is why testing of full-up flight profiles, with and
without hardware in the loop, is considered standard procedure in the
development of complex dynamic systems.

> Additionally, there is no evidence that the software team, reading the
> source code, had sufficient systems knowledge and experience to detect
> the violation of the assertion strictly from reading the code.

Dead on.

> Based on my experience with aerospace systems, and in particular with
> integrated inertial and flight control systems (from the flight controls
> side), I see this scenario as entirely possible. In this case, no amount
> of documentation at the code level would have solved the problem, and no
> amount of embedded run-time checks would solve the problem in the absence
> of realistic testing.

Absolutely. And realistic testing is possible only if you have engineers
(I mean real engineers, not CS people) with knowledge and experience in
control theory, signal processing, orbital mechanics, aeromechanics,
structures, materials, finite element analysis, and so on. These people
should be integrally involved in all software design that affects the
dynamic performance of a vehicle.

CS people are too willing to put in some hack that solves a minor
programming problem without realizing that the change may propagate through
the control loops and cause problems with overall performance. I have seen
it firsthand many times.

For example: when designing target-tracking software for guided missiles, I
have seen CS-trained people randomly add filtering logic to the tracking
algorithms because of track-lock instabilities. They were baffled when I
told them that their "filter" was, from a system perspective, a feedback
compensator, and that it caused the missile to be unable to achieve
intercept. It tracked beautifully until the end game, but by then their
software filter was removing the high-rate information needed for closure
on a now wildly maneuvering target. The missile tracked wonderfully until
about 200 ft from the aircraft; it then lost track and flew harmlessly past
the target.

The problem is not the language, or the absence or presence of assertions.
The problem is having people doing design who don't understand dynamic
systems, especially feedback.

Roger T.
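P.S. For the curious, here is a toy Python sketch of the filter problem
above. The rates and numbers are invented, but it shows how a first-order
smoother that looks harmless against a slow target throws away exactly the
high-rate information needed in the end game:

    import math

    def low_pass(samples, alpha=0.05):
        # First-order smoother: y[k] = y[k-1] + alpha * (x[k] - y[k-1])
        y, out = 0.0, []
        for x in samples:
            y += alpha * (x - y)
            out.append(y)
        return out

    dt = 0.01  # 100 Hz sample rate (invented)
    # Mid-course: slow target motion; smoothing looks like a free win.
    slow = [math.sin(2 * math.pi * 0.2 * k * dt) for k in range(500)]
    # End game: hard maneuvering, i.e. the high-rate content the seeker needs.
    fast = [math.sin(2 * math.pi * 8.0 * k * dt) for k in range(500)]

    for name, signal in (("mid-course", slow), ("end game", fast)):
        filtered = low_pass(signal)
        worst = max(abs(x - y) for x, y in zip(signal, filtered))
        print(name, "worst tracking error after filtering:", round(worst, 2))

The smoother that "fixed" the track jitter attenuates and delays the very
signal content the seeker needs to close on a maneuvering target.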