From: JP Thornley
Subject: Re: Need help with PowerPC/Ada and realtime tasking
Date: 1996/05/28
Message-ID: <63085717wnr@diphi.demon.co.uk>
References: <1026696wnr@diphi.demon.co.uk> <355912560wnr@diphi.demon.co.uk>
Organization: None
Reply-To: jpt@diphi.demon.co.uk
Newsgroups: comp.lang.ada

In article: Robert Dewar writes:
>
> JP Thornley said
>
> "My view is that code can never be judged as safe or unsafe - only
> correct or incorrect. However my usage of the words "safe" - and
> "safety-critical" carries a lot of additional baggage, and it is
> possible that we are differing over the meaning of these words rather
> than anything fundamental.
> "
>
> I think that is completely wrong. Correctness, i.e. formal conformance
> between the implementation and the specification, is neither necessary
> nor sufficient for safety.
>
> It is not necessary, because there can be deviations that are not
> life-critical, e.g. if the horizon display on the pilots console
> is not the specified shade of green, it is not critical.
>
> It is not sufficient, because the formal specification may be incomplete
> or incorrect.
>

But I am talking only about those software components of the system
that have been rated as safety-critical - so, by definition, a failure
of that component to meet its requirements creates an uncontrolled risk
of a hazard occurring.
I would be surprised if the exact shade of green on a display were to
be rated safety-critical. (I suspect that it is unusual for any part of
a display to be rated as safety-critical, as there will always be
multiple independent sources of information.)

Incomplete or incorrect requirements for a software component affect
*system* safety - just as they would for any other type of component.
Clearly it is a software engineering responsibility to check the
requirements for incompleteness and ambiguity but, for example, if an
algorithm is specified incorrectly and this results in a valve opening
instead of remaining closed, I do not see what is gained by claiming
that the software which implements that algorithm is unsafe.

As another way of looking at this, what actions can a software engineer
take to create safe software from potentially incorrect requirements
(apart from being a better domain expert than the systems engineer and
getting the requirements changed)?

-- 
------------------------------------------------------------------------
| JP Thornley                        EMail jpt@diphi.demon.co.uk       |
------------------------------------------------------------------------