Date: 26 May 93 22:42:21 GMT
From: sparky!sparky!dwb@uunet.uu.net (David Boyd)
Subject: Re: McCabe package for Ada?
Message-ID: <1993May26.224221.24676@sparky.imd.sterling.com>

In article , groleau@e7sa.crd.ge.com (Wes Groleau X7574) writes:
|> In article <1993May24.003302.27291@mole-end.matawan.nj.us>
|> mat@mole-end.matawan.nj.us writes:
|> >In article <1993May19.220953.20443@sparky.imd.sterling.com>,
|> >dwb@IMD.Sterling.COM (David Boyd) writes:
|> >> In article , groleau@e7sa.crd.ge.com (Wes Groleau
|> >> X7574) writes:
|>
|> I did not write what follows, in case anyone cares.
|>
|> >> Actually the McCabe folks have come up with some metrics for just
|> >> that situation. ... pathological complexity (pv(G)), which measures
|> >> those truly awful conditions in code such as branches into loops or
|> >> decisions. You know, the stuff you shoot people for, or would only
|> >> expect in a 10-year-old's first program.
|>
|> This is NOT the situation I was talking about. I was talking about code
|> that "fools" a control analyzer into giving it a low complexity rating.
|> Instead of having ridiculously complex decisions and convoluted nesting,
|> it has "simple" one- or two-level control structures. Unfortunately, it
|> isn't any easier to understand, because it's sixteen pages long and the
|> coding thought process goes like this:
|> LOOP
|>   "Hmm, I'm gonna need to make a decision based on these two fields
|>   sometime further down the file, so I'll check the condition now and
|>   throw the answer into this boolean (or enumerated, or integer)
|>   variable."
|>   "OK, now I need to loop the number of times I saved in...which
|>   scratch variable did I put that in?"
|> "And inside this loop I need to create and save another control value |> for the decision in the exception handler..." |> "Now I need a case statement on the variable I assigned by calling that |> subprogram back there ... |> [ more of the same ] |> exit LOOP and hope it works when I can no longer figure out where I am... |> END LOOP |> |> If I had a nickel for the number of times somebody said "I don't know why |> I did that. Must be needed somewhere else." ... I am not certain about how my response above got fitted with the situation that Wes is writing about, it was intended to go with different posting. (I probably screwed up and followed up the wrong article). In any case, I also dread the type of code that Wes is talking about. In the case Wes is talking about the metric needed is the cyclomatic complexity of a module in relation to some set of data elements. Basically it would be a count of the number of paths through the the module which touch one or more of the data elements. For data elements local to the module, this would be very tough to automate computation of since you would only care about some sub-set of the local variables (i.e. you really don't care much about loop counters) and choosing that subset would require examining the software. I would look at values of this number which were close to or equal to the modules total cyclomatic complexity as being an inditcator that something might be wrong. In any case, this type of situation is again very tough to detect automatically. About the only thing that can be done in Wes's case (assuming you have already decided to look at the module) is to break out your trusty data cross reference browser and figure out where that "somewhere else" is located. The case I find even more apalling and which makes the software even harder to maintain is when the variables Wes is talking about are global and the setting and testing of the variables is done in multiple modules. 
The one advantage of the global-variable case is that the complexity of
a module can be computed with respect to a set of variables based on the
variables' scoping. Since the scope of a variable can easily be
determined via static analysis, this type of metric could be computed
automatically.

This whole discussion only reinforces a point I have long accepted:
there is no single metric which will tell good code from bad code, and
the best one can hope for is a set of metrics which, taken together,
help detect most cases. In any case, you will never detect all cases of
unmaintainable, untestable, etc. code. Given any fixed set of metrics,
most good programmers could write some module(s) which look good to the
metrics but which are poor in a number of other ways. This is why there
is no substitute for a good, thorough examination of the code by several
people for detecting that sort of flawed engineering.
--
David W. Boyd                    UUCP:     uunet!sparky!dwb
Sterling Software IMD            INTERNET: dwb@IMD.Sterling.COM
1404 Ft. Crook Rd. South         Phone:    (402) 291-8300
Bellevue, NE. 68005-2969         FAX:      (402) 291-4362