From: Ken Garlington
Subject: Re: Need help with PowerPC/Ada and realtime tasking
Date: 1996/05/29
Message-ID: <31AC0712.29DF@lmtas.lmco.com>
References: <1026696wnr@diphi.demon.co.uk> <355912560wnr@diphi.demon.co.uk> <63085717wnr@diphi.demon.co.uk>
Newsgroups: comp.lang.ada
Organization: Lockheed Martin Tactical Aircraft Systems
Content-Type: text/plain; charset=us-ascii
MIME-Version: 1.0
X-Mailer: Mozilla 2.01 (Macintosh; I; 68K)

JP Thornley wrote:
> 
> But I am talking only about those software components of the system that
> have been rated as safety-critical - so, by definition, a failure of
> that component to meet its requirements creates an uncontrolled risk of
> a hazard occurring. I would be surprised if the exact shade of green
> on a display were to be rated safety-critical. (I suspect that it is
> unusual for any part of a display to be rated as safety-critical as
> there will always be multiple independent sources of information).

Keen! A discussion of software safety in an Ada conference! :)

Actually, I have seen displays rated safety-critical, even in the presence
of multiple sources of information. For example, if a head-up display shows
critical flight information, the HUD might be safety-critical even with
backup displays available, since the pilot may be in a regime where
reference to head-down information is impractical. There may also be a
safety risk if the pilot is presented with conflicting data from multiple
displays (one correct, one failed).

> Clearly it is a software engineering responsibility to check the
> requirements for incompleteness and ambiguity but, for example, if an
> algorithm is specified incorrectly and this results in a valve opening
> instead of it remaining closed, I do not see what is gained by claiming
> that the software which implements that algorithm is unsafe. As another
> way of looking at this, what actions can a software engineer take to
> create safe software from potentially incorrect requirements (apart from
> being a better domain expert than the systems engineer and getting the
> requirements changed).

The claim Dr. Leveson makes in "Safeware" is that the protection is
provided by having independent hardware fail-safes for safety-critical
software. For example, in the Therac-25 case, having a hardware device that
shuts down the beam automatically after a fixed time limit. The fail-safe
doesn't have to duplicate the function of the software; it just has to
provide a minimal shutdown capability. I still haven't figured out a
practical way to do this for my system, but I'm sure it's a good idea for
certain systems. (A rough sketch of the "fixed time limit" idea, in Ada, is
appended after my signature.)

At least in my environment, the software engineer provides feedback to the
domain engineer, so I suppose it is a software engineering job to get
requirements changed, suggest additional safety features, etc.

It sounds like the point has already been made, but it is also good to
remember that, technically, correctness and safety don't have to be
related. You can have correct software that is unsafe, and incorrect
software that is safe.

-- 
LMTAS - "Our Brand Means Quality"
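
P.S. Just to make the "fixed time limit" idea concrete, here is a rough Ada
sketch of a watchdog task that forces a shutdown if the normal disarm never
arrives. Everything in it is invented for illustration (the names, the
2.0-second limit, and the Text_IO call standing in for a real shutdown
command), and of course Leveson's point is that the real interlock should
be independent hardware, not more software running in the same box.

with Ada.Text_IO;

procedure Watchdog_Demo is

   --  Stand-in for the real (hardware) shutdown command.
   procedure Shut_Down_Beam is
   begin
      Ada.Text_IO.Put_Line ("Fail-safe: beam forced off");
   end Shut_Down_Beam;

   task Watchdog is
      entry Arm;      --  called when the beam is enabled
      entry Disarm;   --  called when the beam is shut off normally
   end Watchdog;

   task body Watchdog is
   begin
      loop
         select
            accept Arm;
         or
            terminate;            --  lets the demo exit cleanly
         end select;

         select
            accept Disarm;        --  normal shutdown arrived in time
         or
            delay 2.0;            --  fixed time limit expired
            Shut_Down_Beam;
         end select;
      end loop;
   end Watchdog;

begin
   Watchdog.Arm;
   delay 3.0;   --  simulate a controller that "forgets" to disarm;
                --  the watchdog forces the beam off at 2.0 seconds
end Watchdog_Demo;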