From: nigel@access2.digex.net (Nigel Tzeng)
Subject: Re: Ariane 5 - not an exception?
Date: 1996/07/29
Message-ID: <4tjj6h$1oo@access2.digex.net>
References: <285641259wnr@diphi.demon.co.uk>
Organization: Express Access Online Communications, Greenbelt, MD USA
Newsgroups: comp.software-eng,comp.lang.ada

In article <285641259wnr@diphi.demon.co.uk>, JP Thornley wrote:

>In article: simonb@pact.srf.ac.uk (Simon Bluck) writes:

[snip]

>> Of the two handling options, neither is really acceptable.  However,
>> there is a third option which ought to be considered: to continue but
>> mark the processed data as suspect.
>>
>
>Simon then goes on to describe a way of dealing with data validities
>that unfortunately breaks the most fundamental rule of safety-critical
>code - Keep It Simple.  It's an idea that might work with
>mission-critical code, but the thought of implementing it for
>safety-critical code (remembering that any one of these systems is
>probably handling in the range 200-500 pieces of data - each with its
>associated data validity) is beyond anything that I know how to tackle.
>
>(and I've just realised that each of these 'truth values' and the data
>type information will require their own data validities - this gets
>even more complicated than I first thought)

Actually, we do this all the time... on the ground.  Generating "truth
values" isn't very different (if I understand it correctly) from doing
limit checking and stale-data detection on the telemetry stream.

Now the primary caveat is that on the ground we have tons of CPU to
throw at the problem - not something you can do with a 1750A or even
a 386.

Actually, if you flag the data when an overflow exception is generated,
it won't be too bad at all... hmm... and it gives all downstream
processes visibility into which data point(s) went bad.  Not as good as
true limit checking, but much, much faster.

It would be annoying to shoehorn this into legacy code, but if you're
doing something from scratch you need a reasonably transparent way of
associating data points with truth values.  That could be as simple as
creating your own int and float classes (a rough sketch is at the end
of this post).  I won't hazard a guess at what performance hit you'd
take, though... and there are simpler ways (already suggested elsewhere
in this thread) to solve this particular problem, and it doesn't
address the higher-level problem of the failover mechanism from primary
to backup.

>Phil Thornley

Nigel
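
Here's the rough sketch I mentioned above.  This is purely illustrative -
the names (Bias_Type, Checked_Bias, To_Checked) and the 16-bit range are
made up, and I'm assuming Ada since that's what the Ariane 5 code was
written in.  A real system would have to work out the ranges and the
handling policy properly; the point is just that the conversion catches
the overflow and hands back a flagged value instead of letting the
exception take the processor down.

with Ada.Text_IO;

procedure Flag_Demo is

   --  Hypothetical 16-bit target range for the converted value.
   type Bias_Type is range -32_768 .. 32_767;

   --  A data point that carries its own truth value.
   type Checked_Bias is record
      Value : Bias_Type := 0;
      Valid : Boolean   := False;
   end record;

   --  Convert a raw (wide) reading; on overflow, return a flagged
   --  value instead of letting Constraint_Error propagate.
   function To_Checked (Raw : Long_Float) return Checked_Bias is
   begin
      return (Value => Bias_Type (Raw), Valid => True);
   exception
      when Constraint_Error =>
         return (Value => 0, Valid => False);
   end To_Checked;

   Good : constant Checked_Bias := To_Checked (123.0);
   Bad  : constant Checked_Bias := To_Checked (1.0E9);  --  out of range

begin
   Ada.Text_IO.Put_Line ("Good valid: " & Boolean'Image (Good.Valid));
   Ada.Text_IO.Put_Line ("Bad valid:  " & Boolean'Image (Bad.Valid));
end Flag_Demo;

Downstream code then tests the Valid flag before using the value, which
is where the performance hit (and the extra storage per data point)
would come from.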