From: cmcknigh@hercii.lasc.lockheed.com (Chris McKnight)
Subject: Re: Ariane 5 failure
Date: 1996/09/29
Message-ID: <1996Sep29.193602.17369@enterprise.rdd.lmsc.lockheed.com>
Organization: Lockheed Martin Aeronautical Systems
Reply-To: cmcknigh@hercii.lasc.lockheed.com
Newsgroups: comp.lang.ada

In article Hzz@beaver.cs.washington.edu, pattis@cs.washington.edu (Richard Pattis) writes:

>As an instructor in CS1/CS2, this discussion interests me. I try to talk about
>designing robust, reusable code, and actually have students reuse code that
>I have written as well as some that they (and their peers) have written.
>The Ariane failure adds a new view to robustness, having to do with future
>use of code, and mathematical proof vs "engineering" considerations.

An excellent bit of teaching, IMHO. Glad to hear they're putting more of the
real-world issues into the classroom.

>Should a software engineer remove safety checks if he/she can prove - based on
>physical limitations, like a rocket not exceeding a certain speed - that they
>are unnecessary? Or, knowing that his/her code will be reused (in an unknown
>context, by someone who is not so skilled, and will probably not think to
>redo the proof), should such checks not be optimized out? What rule of thumb
>should be used to decide (e.g., what if the proof assumes the rocket speed
>will not exceed that of light)? Since software operates in the real world (not
>the world of mathematics), should mathematical proofs about code always yield
>to engineering rules of thumb to expect the unexpected?

A good question. For the most part, I'd go with engineering rules of thumb
(what did you expect, I'm an engineer). As an engineer, you never know what
may happen in the real world (in spite of what you may think), so I prefer
error detection and predictable recovery.

The key factors to consider are the likelihood of failure, the cost of
failure, and the cost of leaving the checks in (or adding them where your
language doesn't already provide them).

Consider the first two, likelihood and cost of failure: in a real-time
embedded system, both are often high. Of the two, I think people most often
get caught by misjudging the likelihood of failure. As an example, I've
argued more than once with engineers who think that since a device is only
"able" to give them a value in a certain range, they needn't check for
out-of-range values. I've seen enough failed hardware to know that anything
is possible, regardless of what the manufacturer may claim. Consider your
speed-of-light example: what if the sensor goes bonkers and tells you that
you're going faster? Your "proof" that you can't get that value falls apart
right there. Your point about reuse is also well made. Who knows what someone
else may want to use your code for?

As for the cost of failure, it's usually obvious: in dollars, in lives, or
both.

As for the cost of leaving checks in (or putting them in): IMHO, it's almost
always insignificant.
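To put a little code behind the out-of-range point, here's a rough sketch in
Ada of what I mean by reading the raw value at its full physical width and
letting the range check catch the "can't happen" case. The type names, the
ranges, and Read_Register are invented for illustration, not taken from any
real system:

   with Ada.Text_IO;

   procedure Check_Sensor is

      --  The data sheet says the device only reports 0 .. 1000 counts,
      --  but the register it lives in is a full 16 bits wide, so read it
      --  at full width and let the range check decide whether to trust it.
      type Raw_Word is range 0 .. 2**16 - 1;
      subtype Valid_Counts is Raw_Word range 0 .. 1000;

      --  Stand-in for the real device interface; here it returns a value
      --  the data sheet says we should "never" see.
      function Read_Register return Raw_Word is
      begin
         return 16#3FF#;
      end Read_Register;

      Counts : Valid_Counts := 0;

   begin
      Counts := Read_Register;  --  the constraint check happens right here
      Ada.Text_IO.Put_Line ("counts =" & Valid_Counts'Image (Counts));
   exception
      when Constraint_Error =>
         --  Predictable recovery instead of propagating garbage into the
         --  control loop: hold the last good value, flag the sensor, etc.
         Ada.Text_IO.Put_Line ("sensor out of range, holding last good value");
   end Check_Sensor;

With checks enabled, that one assignment is the whole "cost" side of the
equation; the handler is where you decide what predictable recovery means for
your system.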
If the timing is so tight that removing checks makes the difference, it's
probably time to redesign anyway. After all, in the real world there are
always going to be fixes, new features, etc. that need to be added later, so
you'd better plan for it. Also, it's been my experience that removing checks
buys you somewhere in the single digits of percent improvement. If you're
really that tight, a good optimizer can yield 10%-15% or more (actual mileage
may vary, of course). But again, if that's what makes the difference, you'd
better rethink your design.

So the rule of thumb I use is: unless a device is physically incapable (as
opposed to merely theoretically incapable) of giving me out-of-range data,
I'm going to range check it. I.e., if there are 3 bits, you'd better handle
all 8 values, regardless of how many you think you can actually get.

That having been said, it's often not up to the engineer to make these
decisions. Such things as political considerations, customer demands, and
(more often than not) management decisions have been known to convince me to
turn checks off. As a rule, however, I fight to keep them in, at the very
least through development and integration.

> As to saving SPEED by disabling the range checks: did the code not meet its
>speed requirements with range checks on? Only in this case would I have turned
>them off. Does "real time" mean fast enough or as fast as possible? To
>misquote Einstein, "Code should run as fast as necessary, but no faster...."
>since something is always traded away to increase speed.

Precisely! And when what's being traded is safety, it's not worth it.

Cheers,

   Chris

=========================================================================
 "I was gratified to be able to answer promptly. I said I don't know."
                                                          -- Mark Twain
=========================================================================