Date: 4 Jun 93 22:07:47 GMT
From: dog.ee.lbl.gov!network.ucsd.edu!swrinde!cs.utexas.edu!csc.ti.com!tilde.csc.ti.com!mksol!strohm@ucbvax.Berkeley.EDU (john r strohm)
Subject: Re: McCabe package for Ada?
Message-ID: <1993Jun4.220747.26663@mksol.dseg.ti.com>

In article <1993Jun4.154720.23587@saifr00.cfsat.honeywell.com> shanks@saifr00.cfsat.honeywell.com (Mark Shanks) writes:
>In article dani@netcom.com (Dani Zweig) writes:
>
>>It turned out that there was a 96% correlation between the Cyclomatic
>>Complexity of a module and the number of statements in a module. The
>>link is that Cyclomatic Complexity is almost perfectly correlated with
>>the number of decisions in a module, the number of decisions is almost
>>perfectly correlated with the number of IFs, and the *density* of IFs
>>is quite consistent, with the result that the number of IFs is almost
>>perfectly correlated with the number of statements.
>
>As I understand it, the cyclomatic complexity - extended McCabe's -
>is a count of conditions (IFs, ELSEIFs, AND THENs, etc.) in a
>procedure + 1. I've verified this on a few (20-30) procedures.
>But it seems you're concluding that the complexity metric would
>(could?) be correlated with the number of statements, and I
>haven't had that experience.

No, he said they MEASURED that correlation, using actuals from their code.

>I see you are referring to a
>COBOL/MIS environment; I'm using Ada.

As I read what he posted, he was in an environment that was using Ada to
do COBOL/MIS kinds of things. Sort of doesn't matter: McCabe complexity
is not going to be affected significantly by Ada vs. COBOL. (You have
learned my darkest secret: years ago, I learned COBOL, as well as the
more "mainstream" languages like Pascal, FORTRAN, LISP, Ada, and C.)

>>Needless to say, Cyclomatic Complexity turns out to be an excellent
>>predictor of defects -- by virtue of the fact that long modules average
>>more defects than short modules. All of which is to say that people
>>who measure Cyclomatic Complexity probably aren't measuring what they
>>think they're measuring. (McCabe's Essential Complexity has its own
>>problems, but it's a *somewhat* better measure of the 'messiness' of a
>>module's control structure.)
>
>Well, at the risk of appearing hopelessly inept, I have a problem
>with high McCabe values as a necessary indicator of procedure
>complexity/lack of maintainability. I have procedures that are
>logically grouped by function, for example, displaying aircraft
>brake antiskid failure messages. There are six possible messages,
>plus the condition of blanking the display for invalid data
>conditions. I am checking the status of 64 input parameters
>(mostly Booleans) to determine which message/combination of messages
>should be displayed. (If XXXXXX and then YYYYY then...)
>Based on the nature of this McCabe metric, the count of IFs and
>ELSEIFs gets quite high very quickly, yet the code is quite
>simple to follow. I have the option of breaking this function up
>into smaller procedures (this procedure has 186 logical lines of
>code), but that wouldn't make it any less complicated or more
>maintainable. Any suggestions? Should I care if the McCabe is
>high but the code is obviously not complex and after exhaustive
>testing has no defects?
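For concreteness, here is a minimal sketch of the kind of flat selection
tree being described; the names, messages, and inputs are made up for
illustration and are not taken from the actual antiskid code. Each IF or
ELSIF arm and each AND THEN adds one condition, so a routine in this style
racks up a large McCabe number even though it reads straight down the page:

   with Text_IO;
   procedure Select_Antiskid_Message
     (Data_Valid          : in Boolean;
      Left_Outboard_Fail  : in Boolean;
      Right_Outboard_Fail : in Boolean;
      Wheel_Spinup_OK     : in Boolean) is
   begin
      if not Data_Valid then
         Text_IO.Put_Line ("");                        -- blank the display
      elsif Left_Outboard_Fail and then Wheel_Spinup_OK then
         Text_IO.Put_Line ("L OUTBD ANTISKID FAIL");   -- message 1 of 6
      elsif Right_Outboard_Fail and then Wheel_Spinup_OK then
         Text_IO.Put_Line ("R OUTBD ANTISKID FAIL");   -- message 2 of 6
      elsif Left_Outboard_Fail or else Right_Outboard_Fail then
         Text_IO.Put_Line ("ANTISKID FAIL");           -- message 3 of 6
      end if;
      -- ...and so on for the remaining messages and the other inputs.
   end Select_Antiskid_Message;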
Some twenty years ago, Halstead originated what came to be called
"software physics", but which we now call "metrics". The wind was forcibly
removed from their sails when someone showed that ALL of their different
measures, each one supposedly measuring some fundamentally different aspect
of program complexity, were strongly correlated with each other and with
the number of source lines of code.

What this, and the McCabe measure, and the old-fashioned ruler test all
seem to say is that bigger modules TEND to be problem sources, and
TYPICALLY need more attention for quality, maintainability, reliability,
and flavor. Exceptions exist on both ends of the scale. No metric is a
substitute for a functioning set of brain cells.

As for your particular procedure: if you currently have it partitioned into
one decision tree that calls one of a large handful of procedures, then you
have probably done all that you can. The test is this: next week, the
systems engineers will come to you with a new special case that you need to
tack in. How easy will it be to make the mod and test the modified module?
If we repeat this five times (or fifty), what will the result look like?
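To make that test concrete, here is one possible shape for the "one
decision tree plus a handful of procedures" partition. This is only a
sketch with made-up names and only three of the sixty-four inputs, not the
actual antiskid code: the tree picks a message identifier, a separate
routine turns the identifier into a display, and next week's special case
becomes one new enumeration literal, one new ELSIF arm, and one new CASE
alternative:

   with Text_IO;
   procedure Antiskid_Demo is

      -- One identifier per possible display state; a new special case
      -- means one new literal here.
      type Message_Id is (Blank, L_Outbd_Fail, R_Outbd_Fail, Both_Fail);

      -- The decision tree: the only place that looks at the raw inputs.
      function Select_Message (Data_Valid, L_Fail, R_Fail : Boolean)
        return Message_Id is
      begin
         if not Data_Valid then
            return Blank;
         elsif L_Fail and then R_Fail then
            return Both_Fail;
         elsif L_Fail then
            return L_Outbd_Fail;
         elsif R_Fail then
            return R_Outbd_Fail;
         else
            return Blank;
         end if;
      end Select_Message;

      -- The display routine: knows message text, nothing about inputs.
      procedure Display (Msg : in Message_Id) is
      begin
         case Msg is
            when Blank        => Text_IO.New_Line;
            when L_Outbd_Fail => Text_IO.Put_Line ("L OUTBD ANTISKID FAIL");
            when R_Outbd_Fail => Text_IO.Put_Line ("R OUTBD ANTISKID FAIL");
            when Both_Fail    => Text_IO.Put_Line ("L-R OUTBD ANTISKID FAIL");
         end case;
      end Display;

   begin
      Display (Select_Message (Data_Valid => True,
                               L_Fail     => True,
                               R_Fail     => False));
   end Antiskid_Demo;

The decision count, and hence the McCabe number, does not drop much with
this split, but each new case touches three obvious places and nothing else.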