From: eachus@spectre.mitre.org (Robert I. Eachus)
Subject: Re: Need help with PowerPC/Ada and realtime tasking
Date: 1996/05/28
Message-ID: #1/1
References: <1026696wnr@diphi.demon.co.uk>
Organization: The Mitre Corp., Bedford, MA.
Newsgroups: comp.lang.ada

In article <355912560wnr@diphi.demon.co.uk> JP Thornley writes:

 > My view is that code can never be judged as safe or unsafe - only
 > correct or incorrect. However my usage of the words "safe" - and
 > "safety-critical" carries a lot of additional baggage, and it is
 > possible that we are differing over the meaning of these words
 > rather than anything fundamental...

 > So safety is measured by (usually) small but definitely non-zero
 > numbers; software is either correct or not, with no numeric scale.

You must work with a different type of software than I do. With a
restricted input set, such analysis can be right. But I (and a lot of
other people) find that some system requirements delegated to the
software do have percentages or failure tolerances attached. For
example, in a target-rich environment, what is the probability that two
detections of a target from a pulse-doppler radar will not be
correlated? There are failure rates for the sensors, some tied to the
geometry of the radar or the signatures of the target, and some
failures due to the algorithms (and software) used.

This is not just a military issue. Many years ago, several planes
crashed while using radar altimeters.
It turned out that, under certain circumstances, the return off the
bottom of a lake was read instead of the surface reflection. (Usually
at night, when a thin skin of ice produced very smooth surfaces that
were at the wrong angle for specular reflection.) Oops! Pilots would
adjust their barometric altimeters to match, then crash on final
approach. Once the problem was known, it was possible to "smarten up"
the radar.

Similarly, in a C3I system, what is the probability that incorrect
inputs will be detected? The original error may be an operator error,
but the error-detection software gets tagged with the error budget for
undetected errors.

Last but not least, there is the ugly ghost of Gödel. Systems below a
certain level of complexity can be 100% right or 100% wrong. Above
that line, software systems look a lot more like hardware.

--
					Robert I. Eachus

with Standard_Disclaimer;
use  Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...