From mboxrd@z Thu Jan 1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level:
X-Spam-Status: No, score=-1.3 required=5.0 tests=BAYES_00,INVALID_MSGID autolearn=no autolearn_force=no version=3.4.4
X-Google-Language: ENGLISH,ASCII-7-bit
X-Google-Thread: 103376,a83c46b54bacb7f6
X-Google-Attributes: gid103376,public
From: JP Thornley
Subject: Re: JOB:Sr. SW Engineers Wanted-Fortune 500 Co
Date: 2000/02/05
Message-ID: #1/1
X-Deja-AN: 581872457
X-NNTP-Posting-Host: diphi.demon.co.uk:158.152.212.133
References: <3894A823.92EC75D1@bondtechnologies.com> <874b7r$mj9$1@nnrp1.deja.com> <38967537_1@news.jps.net>
X-Trace: news.demon.co.uk 949748460 nnrp-07:17620 NO-IDENT diphi.demon.co.uk:158.152.212.133
MIME-Version: 1.0
Newsgroups: comp.lang.ada
X-Complaints-To: abuse@demon.net
Date: 2000-02-05T00:00:00+00:00
List-Id:

In article , Pat Rogers writes
>Error checking at run-time is still vital, and Ada's checking (if left
>in) can help.
>
>Although it is a common practice in (well-done!) safety-critical
>development to prove that exceptions cannot occur, they still can. The
>obvious cause is radiation-induced hardware errors.

But for random memory events, run-time checks don't do the job. Firstly, because a protection mechanism whose level of protection depends on the declared range of the variable's type would not be acceptable. (For a variable of Character type, stored in one byte, the protection would be zero.) Secondly, because compiler writers work very hard at removing run-time checks when it is 'known' that a run-time check cannot fail.

The ideal way to approach protecting data from corruption is to determine three things:

1. what hazards could be caused by corruption of the data,
2. what is the cost (in all its aspects) of identifying and correcting that corruption,
3. what is the probability of the corruption occurring.
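To make the first point concrete, here is a minimal (hypothetical) sketch: Ada's run-time checking can only catch a stored value that has left its declared range, so a constrained subtype gets some protection against corruption while a full-range type such as Character, which occupies its whole byte, gets none.

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Range_Check_Demo is
   type Percent is range 0 .. 100;   --  values outside 0..100 are caught
   P : Percent := 50;
begin
   --  Constraint_Error is raised only when a value leaves the declared
   --  range; a corrupted Character would pass unnoticed, since every
   --  bit pattern in its byte is a valid Character.
   P := P + 60;   --  110 > 100: raises Constraint_Error
   Put_Line ("Not reached");
exception
   when Constraint_Error =>
      Put_Line ("Corruption outside the declared range was caught.");
end Range_Check_Demo;
```

And, of course, a compiler that can prove the assignment never overflows is entitled to remove even this check.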
When you have all this data you can make a reasoned judgement, based on ALARP principles (As Low As Reasonably Practicable), about what to do. Unfortunately, in the work I have done in this area, we have zero data for the probability of data corruptions occurring.

So a common strategy (based on the current use of cyclic schedulers) is to protect and check any data that is stored between cycles, but not to protect data values that are created and used within a cycle. This makes the data to be protected easy to identify, and it bounds the time that any data remains in store without being checked (and gets us out of having to protect tricky stuff such as stack frame pointers and subroutine return addresses).

This works well in systems with low feedback, where a bad output on one iteration can be tolerated but a sequence of bad outputs cannot (which I suspect covers a large number of safety-critical systems). It also supports the use of pragma Suppress_All, having of course proved that no run-time check would have failed if it had not been suppressed.

Note that if run-time checks are left in, they create a substantial testing task. Safety-critical standards require test coverage of every branch in the executable code, so the tester must first identify where run-time checks have been compiled in and then create test data that will cause each one of them to fail - not always easy, and probably requiring intrusive test techniques (themselves regarded with deep suspicion when developing safety-critical code).

Cheers,

Phil

-- 
JP Thornley
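P.S. The between-cycles strategy above could be sketched in outline like this (all names are illustrative, not from any real project): data that survives a cycle is stored together with a check word, and is verified when read back on the next cycle. Here the check word is simply the one's complement of the data, which detects any single bit flip in either copy; a real system might use a CRC over a larger block.

```ada
with Interfaces; use Interfaces;

package Protected_Store is
   type Checked_Value is private;
   Corrupt : exception;

   procedure Write (V : in out Checked_Value; Data : Unsigned_32);
   function  Read  (V : Checked_Value) return Unsigned_32;
   --  Read verifies the check word first and raises Corrupt on mismatch.

private
   type Checked_Value is record
      Data  : Unsigned_32 := 0;
      Check : Unsigned_32 := not Unsigned_32'(0);  --  complement of Data
   end record;
end Protected_Store;

package body Protected_Store is

   procedure Write (V : in out Checked_Value; Data : Unsigned_32) is
   begin
      V.Data  := Data;
      V.Check := not Data;            --  store one's complement as check word
   end Write;

   function Read (V : Checked_Value) return Unsigned_32 is
   begin
      if V.Check /= not V.Data then   --  a bit flip in either copy shows up
         raise Corrupt;
      end if;
      return V.Data;
   end Read;

end Protected_Store;
```

Within-cycle temporaries bypass this entirely, which is what keeps the scheme cheap: only the (small, easily identified) set of inter-cycle state pays the storage and checking cost.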