From: Phil Goodwin
Subject: Re: Software landmines (loops)
Date: 1998/09/03
Message-ID: <6smrvi$86c$1@nnrp1.dejanews.com>
References: <902934874.2099.0.nnrp-10.c246a717@news.demon.co.uk>
  <6r1glm$bvh$1@nnrp1.dejanews.com> <6r9f8h$jtm$1@nnrp1.dejanews.com>
  <6renh8$ga7$1@nnrp1.dejanews.com> <6rf59b$2ud$1@nnrp1.dejanews.com>
  <6rfra4$rul$1@nnrp1.dejanews.com> <35DBDD24.D003404D@calfp.co.uk>
  <6sbuod$fra$1@hirame.wwa.com> <35f51e53.48044143@
  <904556531.666222@miso.it.uq.edu.au> <6sf87j$47n$1@hirame.wwa.com>
  <6shp1i$ead$1@nnrp1.dejanews.com> <35EC937F.94420C51@ibm.net>
  <6sk01j$1qn$1@nnrp1.dejanews.com> <6skerj$1u5$1@hirame.wwa.com>
Organization: Deja News - The Leader in Internet Discussion
Newsgroups: comp.lang.eiffel,comp.object,comp.software-eng,comp.lang.ada

Comments interspersed...

In article <6skerj$1u5$1@hirame.wwa.com>, "Robert Martin" wrote:
>
> Phil Goodwin wrote in message <6sk01j$1qn$1@nnrp1.dejanews.com>...
>
> >I would posit that if you are changing the function at all you must do a
> >complete regression test on it.
> >I grant you that the more you change the routine the more likely you are
> >to introduce bugs and that is an important consideration. However, what
> >we are doing here is deferring the risk of creating a more complicated
> >algorithm to the point in time where we know that the risk is worthwhile.
>
> Unfortunately, by the time you know that the structure is wrong, it is
> often difficult to justify making it right. It is almost always easier to
> find a clever fix than it is to restructure so that the clever fix isn't
> necessary. (Witness the windowing technique for fixing Y2K bugs.)

Then you must either arrange things so that you never have to restructure or
you must have some way to know when to restructure rather than make a
'clever fix'.

> Now, you might suggest that once faced with the choice of clever fix or
> refactoring, one should always choose refactoring. But I will respond to
> that with two points.

I might suggest that, but, as it happens, I will not...

> 1. Your premise is that you shouldn't pay for a risk you aren't sure will
> materialize. Since you don't know that you'll need another clever fix,
> your premise will lead you to simply make the current clever fix.
>
> 2. Multiple returns *were* the first clever fix.
>
> If you look at these two statements carefully you will realize that they
> form the essence of inductive reasoning, leading to the conclusion that
> true refactoring will never happen.

Right, the premise that you should Do The Simplest Thing That Could Possibly
Work isn't powerful enough to enable decisions in all cases. There is
another useful premise, which the Extreme Programming guys call "Once And
Only Once", that is used to trigger refactoring. In short, it prohibits
'clever tricks' that lead to code duplication. The Once And Only Once rule
will save us from making the mistakes that you have presented in your
examples.
It may turn out to be insufficient to overcome all the shortcomings of Do
The Simplest Thing That Could Possibly Work, but I'm not aware of any
examples where it doesn't.

> I take a slightly different view. If the cost of protecting myself from
> risk is low, and if the cost of the risk materializing is high, then I
> will pay for the risk up front. It's like buying insurance.
>
> >The only other option is to assume that the risk will always turn out to
> >be worthwhile and code the more complicated algorithm in every case.
>
> Again, if the cost of the more complex algorithm is low, and if the cost
> of the risk is high, then this may not be a bad decision.
>
> >My position is not that strict structured programming has no benefit, it
> >is that it has a cost and that the cost is not justified unless the
> >benefit is received.
>
> Is the cost of your medical insurance, or your auto insurance, or your
> life insurance justified? Of course it is! (And you should see my health
> insurance rates!) The reason it is justified is that the potential
> downside is enormous.
>
> So the justification of a maintainable approach must be that the downside
> potential is high enough to get us to gladly pay the up-front costs of
> protection.

So, you are saying that, in the cases where the risk DOES turn out to be
worthwhile, the value is so great that it outweighs the cost in all the
other cases where the precaution turned out to be unnecessary? I think that
this is sound reasoning, but it hinges on the premise that the realized
value is very great. So you insure your home but not your plastic lawn
furniture. I advocate using SE/SE on large, complicated functions, but not
always on small, simple ones. The message that I'm interested in conveying
is not that the rule is bad and should be thrown out, but rather that it
shouldn't be rigidly applied in every circumstance.
Knowing why the rule exists at all and what benefit following it confers is,
IMHO, far more important than always following it to the letter because it's
a Good Thing.

Phil