Date: 13 May 93 23:48:13 GMT
From: hubcap!ncrcae!ncrhub2!ncrgw2!psinntp!witch!mlb!mbayern@gatech.edu (Mark Bayern)
Subject: Re: Ada compiler for the IBM PC
Message-ID: <119@mlb.win.net>

>
>Sounds like you'd like to use EditAda, a Windows-based editor aimed
>at Ada developers. It's a shareware product available via anonymous
>ftp from the Wash U archive or from the CICA (Indiana) archive.
>
>Shameless plug is now over. Back to the usual rabble.... :-)
>

Is this the same EditAda I got last week from CLMFORUM on Compu$erve?
If so, it doesn't work (on my machine, anyway). It would always give an
Actor (huh??) error message, 'out of disk space'. Since I've got over
50M free on this thing, I don't think it is really out of space. Any
ideas? How about a phone number so we can talk about it?

Mark Bayern

------------------------------

37741040@Isis.MsState.Edu> raj2@Isis.MsState.Edu (rex allan jones) writes:

> Could someone please point me in the direction of a metrics package
>to compute McCabe's Cyclomatic Complexity for Ada programs?

Logiscope, from Verilog. Or AdaDL, from SSD (?). Or check the Ada
Software Repository for something free.

Someone else (I hate "vi") wrote:

>Not that this answers your question, but since you bring it up :)
>
>Who is using McCabe, and why? What values are construed as "acceptable",
>"requires further investigation", and "this code is rejected"? How
>were these values derived? Enquiring minds want to know...

McCabe scores are not, by themselves, a very useful indication of Ada
quality. However, they can be of some use. When someone hands me 2500
lines of code written by the guy who just quit, I look first at the
routines with the highest McCabe scores. Almost always, if the score is
over twenty, I can bring it down to 15 or less with some thoughtful
re-design. Not everyone can make sense out of spaghetti code like that,
so I see it as a "service" to those who come after me--even if the code
was already meeting its requirements (of course I keep the old version
for safety :-) ).

As far as what the values SHOULD be: I have had to work on code written
by many other people, so I have a good idea of who writes "good stuff"
and who writes "bad stuff". I can put the McCabe scores for all of
Fred's units in one list, all of Frieda's in another, etc., and compare.
The averages are usually QUITE different.

Wes G.

P.S. One thing McCabe doesn't show is what I call "hidden" complexity--
hidden because it doesn't show up on a flow chart or path graph. This
kind of complexity is produced by a technique more than one of my former
colleagues was fond of: declaring lots of "flag" variables and using
them to control branching at a point far away from where they were set.
I'd like to see someone come up with a McCabe-like metric where dotted
lines are drawn on the control graph connecting the setting of one of
these flags with all tests that depend on that setting, and the metric
is affected by those lines and their nodes. Whew! What's the complexity
metric of that last sentence?!?!?
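
For anyone counting scores by hand, a toy Ada routine (invented here for
illustration; it is not taken from Logiscope or AdaDL) shows how the
number adds up under the usual counting rule: each "if", "elsif", and
loop condition contributes one decision, and V(G) = decisions + 1.

   --  Toy example, assuming the standard "decisions + 1" counting.
   --  Decisions: if (1), elsif (2), for loop (3), nested if (4),
   --  so V(G) = 5.
   procedure Classify (N : in Integer; Label : out Character) is
   begin
      if N < 0 then                     -- decision 1
         Label := 'N';
      elsif N = 0 then                  -- decision 2
         Label := 'Z';
      else
         Label := 'P';
      end if;

      for I in 1 .. N loop              -- decision 3
         if I mod 2 = 0 then            -- decision 4
            Label := 'E';
         end if;
      end loop;
   end Classify;

Re-design of the kind Wes describes usually works by splitting a routine
that scores over twenty into several smaller routines, each of which
then scores well under 15 on its own.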
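
The "hidden" complexity in the P.S. is easy to picture with a short
fragment (again invented; Needs_Cleanup, Threshold, and Release_Resources
are placeholder names). McCabe counts the test on the flag as just one
more decision, but nothing in the score reflects the distance between
where the flag is set and where it is tested; a metric along the lines
Wes suggests would add one extra edge per set/test pair to the control
graph before computing V(G) = E - N + 2P.

   --  Placeholder names throughout; the cleanup logic is a stand-in.
   procedure Process (Item : in Integer) is
      Threshold     : constant Integer := 100;
      Needs_Cleanup : Boolean := False;   -- the "flag" variable

      procedure Release_Resources is
      begin
         null;   -- stand-in for the real cleanup work
      end Release_Resources;
   begin
      if Item > Threshold then
         Needs_Cleanup := True;           -- flag set here ...
      end if;

      --  ... imagine pages of unrelated statements here ...

      if Needs_Cleanup then               -- ... and tested far away here.
         Release_Resources;
      end if;
   end Process;

Plain McCabe gives this fragment a score of 3 (two decisions plus one)
and treats the two "if" statements as unrelated; the dotted line from
the assignment to the later test is exactly what the proposed metric
would add.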