From mboxrd@z Thu Jan  1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level:
X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00 autolearn=ham autolearn_force=no version=3.4.4
X-Google-Language: ENGLISH,ASCII-7-bit
X-Google-Thread: fac41,e01bd86884246855
X-Google-Attributes: gidfac41,public
X-Google-Thread: 103376,fb1663c3ca80b502
X-Google-Attributes: gid103376,public
From: "Howard W. LUDWIG"
Subject: Re: Interresting thread in comp.lang.eiffel
Date: 2000/07/13
Message-ID: <396DEEE8.60B3F5C7@lmco.com>
X-Deja-AN: 645970067
Content-Transfer-Encoding: 7bit
References: <8ipvnj$inc$1@wanadoo.fr> <8j67p8$afd$1@nnrp1.deja.com>
 <395886DA.CCE008D2@deepthought.com.au> <3958B07B.18A5BB8C@acm.com>
 <395A0ECA.940560D1@acm.com> <8jd4bb$na7$1@toralf.uib.no>
 <8jfabb$1d8$1@nnrp1.deja.com> <8jhq0m$30u5$1@toralf.uib.no>
 <8jt4j7$19hpk$1@ID-9852.news.cis.dfn.de> <3963CDDE.3E8FB644@earthlink.net>
 <3963DEBF.79C40BF1@eiffel.com> <2LS85.6100$7%3.493920@news.flash.net>
 <8k5aru$1odtq$1@ID-9852.news.cis.dfn.de>
 <8k8pk2$20cab$1@ID-9852.news.cis.dfn.de>
 <_dS95.9945$7%3.666180@news.flash.net>
X-Accept-Language: en,pdf
Content-Type: text/plain; charset=us-ascii
Organization: Lockheed Martin -- Information Systems Center
Mime-Version: 1.0
Newsgroups: comp.lang.ada,comp.lang.eiffel
Date: 2000-07-13T00:00:00+00:00
List-Id:

David K Allen wrote:

> [snip]
>
> When I am writing a routine that uses a function (like 'convert'), and I
> see a precondition (like hb <= Mb), I say to myself:
> "As the client (user) of the routine, I better be prepared to honor its
> preconditions or I risk failure."
> So I take either of two approaches in such a case.  I either choose to
> test the precondition myself, before calling 'convert', or I choose to
> ignore it and assume that an error MAY occur in the precondition, and I
> prepare to trap the exception that may result and handle the failure.
> That is a style point for systems where reliability is important.
> But this seems equivalent in my mind to "protecting the variables."
> And the inquiry noted (your footnote (6))
>
> "It is important to note that the decision to protect certain variables
> but not others was taken jointly by project partners at several
> contractual levels"
>
> Yes, I know - the reason for this jointly agreed decision was cited as
> maintaining CPU performance.  My only claim is that if DBC were part of
> the coding culture, it would have been less likely that that decision
> would have been "jointly" agreed to.  If I were working on high-risk
> stuff with a team of others trained in DBC, I can't see that we would
> reach a consensus to ignore the warning of a precondition.  It just
> would not be done without management overriding us.

You seem to be _starting_ to understand the true situation in the aerospace
domain, but you still have a ways to go.

In the business applications world, most of the time you don't really care
much about how long some action takes.  If the program spews out an answer
in 0.5 s instead of 0.4 s, the user will hardly notice.  It's not the end of
the world if the time is 15 s versus 10 s.  With existing technology trends,
if 15 s is too slow, you add 128 megabytes of RAM and wait a month for a
computer with a 100 MHz higher clock rate, or network in a second computer
today, and just plug it all into the nearest wall socket in your
air-conditioned room--no big deal.

In the space environment, you have no such luxury.  To survive the wild
temperature extremes (which make the Saharan summer and the Siberian winter
seem comparable and mild) and high-energy, highly ionized particles (from
solar flares, radiation belts, and the like), processors must be hardened.
There is not much demand worldwide for such processors, so choice and supply
are rather limited.
The most common option is a MIL-STD-1750 16-bit architecture processor; such
processors run quite a bit more slowly than your Pentiums and PowerPCs (by
about a factor of 10, if I recall correctly).  You have very limited space,
power, cooling, etc., and weight is at a premium.  From what I remember from
the 1970s (and it should still be close to true), to add a bank of RAM or a
processor of mass m, you must either add 8m of fuel to achieve the same
Earth orbit or take m away from your payload (which has already been trimmed
way down).

Your processor is running in a hard-real-time environment.  If you fail to
provide updated guidance commands _every_ 0.01 s to 0.02 s (you can't miss
even once), you go into a mission-ending oscillation or a hardware lock-up,
and there are lots of matrix calculations that have to be done in that time.
I have worked on one processor operating on a 3 600 Hz cycle--that means
_all_ your computations must be completely redone and finished every
0.000 278 s; it's fine (in some sense) if you get done in 0.000 277 s, but
it's fatally flawed if it ever takes 0.000 279 s.

Generally, some margin is required, because things can always go wrong (such
as there being a case worse than what you thought was worst, or a little
dither from I/O bottlenecks, cache misses, etc.).  In the case of Ariane 5,
what was believed to be the worst case had to complete in at most 80 % of
the maximum allowed time.  Such margins are set up rigidly at the beginning,
based on undistracted, rational thinking, so that when the real pressure
hits and you are eating into the margin, nobody can rationalize--without a
clear, unbiased thought process--moving to 81 %, 82 %, 83 %, ... and
creeping on up until you've done something really stupid and crashed the
whole system.  (In most systems without an outside agency arbitrarily
dictating the margin, engineers will settle on an appropriate value
somewhere in the 70 % to 85 % range, with 80 % being very common.
When bureaucrats get involved and impose something from outside, such as the
USA Dept. of Defense, it has been common to allow usage of only 50 % of what
is available.)

Given the amount of processing that must be done, the limitations on
processor capability, and the hard timing deadline, it is hard to get the
actual required processing done--never mind any assertion checking.  We go
into projects with ideals like "With this faster, new processor, perhaps we
can finally leave some run-time checking in the code", but the systems
engineers almost always come up with additional ways to spend the added
throughput on tighter, better (more calculation-intensive) processing
requirements, so at the end we say "Oh, well, life as usual--take out the
run-time checks."  It's not something we want to do--it's just a fact of
life that we grow accustomed to.  If we don't do it, the system won't work.
So it's _not_ a matter of some managerial bureaucrat directing us, over our
dead bodies, to take out the checks--we are engineers, we want our systems
to work, and the system just plain _won't_ work with the checks left in.

Over the last three years I have finally started seeing processor capability
grow faster than systems engineers' ability to use it for endoatmospheric
applications (for which the hardness of the processor is not as much of an
issue), so throughput is rapidly becoming less of a problem in many such
projects.  Of course, Ariane 501 was 1996 [for production, and even older
for design] and both endo- and exo-atmospheric.

As a side note, these very tight timing constraints are why hard-real-time
software developers avoid like the plague languages such as Eiffel in which
garbage collection is mandatorily done in a manner that is "transparent" to
the developer.  If you have only 0.5 ms to spare in each processing cycle
and garbage collection comes along and grabs 2 ms at an uncontrollable time
to clean up memory, you've blown your timeline and crashed the rocket.
> Without the DBC guidelines or something equally simple and easy to
> understand, I fear that such decisions simply degenerate into very fuzzy
> gut-level (is it good enough) decisions.  Although all development
> ultimately has that anyway.  I really believe that DBC might have tipped
> the balance.

And when you have 100 assertions available to check, and throughput
available to do at most 12, how do you choose with DbC which 88 to drop?
This is the kind of decision the Ariane 501 people had to make, and, based
on all the evidence they had, they dropped the ones they [erroneously]
thought were least at risk.  The problems with getting more evidence across
company and national boundaries when Dilberts and pointy-haired bosses are
involved have already been discussed elsewhere in this thread.

> Most importantly, in the practical world in which we work and live, I
> consider DBC to be easy to understand and powerful at the same time.
> Perhaps the language of the original authors you criticize was too bold
> or propagandistic for your taste.
> But I'm glad it struck you that way.  I learned more about why I like DBC
> by reading your criticisms;)
> As I read the inquiry, it strikes me how a team that practices DBC
> routinely would have probably made different choices.

And what would "practices DBC routinely" mean in the context I described
above?  That if management refused to budge, the engineers would quit, even
though it is impossible (even just on technical grounds, never mind the
political ones--you _can't_ violate the principles of physics) to satisfy
DbC fully?  The rocket needs to be built anyway.  I am, nevertheless, in
full agreement with the notion that "blind reuse is worse than no reuse",
and the Ariane 501 staff did fall short there.  While DbC looks useful in
many ways, I just don't see that the lack of its [or Eiffel's] use on
Ariane 501 was the straw that broke the proverbial camel's back.
I'm glad you have found something that helps you in your application domain,
but one size doesn't fit all.

> But I agree that there were a host of other issues, and DBC alone would
> not rescue things.
> --
> Best Wishes,
> David Kreth Allen
> Software Consultant
> Minneapolis, Minnesota - USA

Howard W. LUDWIG