From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Marc A. Criley"
Subject: Re: Lines-Of-Code Utility?
Date: 1998/11/05
Message-ID: <364199CC.65FFBEBE@lmco.com>
References: <3642c88c.24407137@news.geccs.gecm.com> <71prhl$uod$1@nnrp1.dejanews.com>
Organization: Lockheed Martin M&DS
Newsgroups: comp.lang.ada

dewarr@my-dejanews.com wrote:
>
> In article , "joecool" wrote:
> >
> > I need a lines-of-code utility that will run on NT and produce reports on
> > lines-of-code for Ada 95. It would be nice if it can run recursively on a
> > specified directory structure.
> >
> > If there are various standards for reporting lines of code for Ada 95, I
> > would like to know them. I am looking for any information anyone can
> > provide on such utilities.
>
> Lines of code is always a dubious metric, since it is so variable.

Capers Jones, software metrics guru, had the following to say about counting
source lines in the October 98 issue of Crosstalk
(http://www.stsc.hill.af.mil/CrossTalk/1998/oct/letters.html):

"Elizabeth Starrett's article, "Measurement 101," Crosstalk, August 1998, was
interesting and well written, but it left out a critical point. Metrics based
on "source lines of code" move backward when comparing software applications
written in different programming languages. The version in the low-level
language will look better than the version in the high-level language.

"In an article aimed at metrics novices, it is very important to point out
some of the known hazards of software metrics.
The fact that lines of code can't be used to measure economic productivity is
definitely a known hazard that should be stressed.

"In a comparative study of 10 versions of the same application using 10
different programming languages (Ada 83, Ada 95, C, C++, Objective C, PL/I,
Assembler, CHILL, Pascal, and Smalltalk), the lines of code metric failed to
show either the highest productivity or best quality. Overall, the lowest cost
and fewest defects were found in Smalltalk and Ada 95, but the lines of code
metric favored assembler. Function points correctly identified Smalltalk and
Ada 95 as being superior, but lines of code failed to do this."

Therefore one might consider the "lines-of-code utility" to be low :-)

(The following is directed towards the bean-counters, not the metricians.)

So why is SLOC still the most common basis for productivity measurements?
Because it's the easiest thing to measure. The results are inaccurate and can
easily mislead, but it's easy and cheap. Function points and other metrics
that depend on the content of the software require more work to extract. And
while they may be more accurate, provide more insight into development and
productivity, and lead to more accurate budgeting and scheduling and more
efficient application of resources over the life cycle, well... it's easier
to count semicolons.

-- 
Marc A. Criley
Chief Software Architect
Lockheed Martin ATWCS
marc.a.criley@lmco.com
Phone: (610) 354-7861
Fax  : (610) 354-7308
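To make the "count semicolons" quip concrete: here is a minimal sketch of the
kind of naive, recursive SLOC counter the original poster asked about, written
in Python. All names are hypothetical, and it deliberately embodies the crude
approach being criticized above: it strips Ada end-of-line comments and counts
semicolons, ignoring string literals (so a ";" inside a string would be
miscounted) and everything Jones's letter says about what such counts fail to
measure.

```python
import os

def count_ada_sloc(path: str) -> int:
    """Naively count 'logical lines' in one Ada source file by counting
    semicolons after stripping '--' end-of-line comments. String literals
    are not parsed, so a ';' inside a string would be (mis)counted."""
    total = 0
    with open(path, "r", errors="replace") as f:
        for line in f:
            code = line.split("--", 1)[0]  # drop Ada end-of-line comment
            total += code.count(";")
    return total

def count_tree(root: str) -> dict:
    """Walk a directory tree recursively and report per-file semicolon
    counts for files with common Ada extensions."""
    report = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith((".ads", ".adb", ".ada")):
                full = os.path.join(dirpath, name)
                report[full] = count_ada_sloc(full)
    return report
```

Even this toy illustrates the point of the thread: the number it produces is
cheap to compute but says nothing about productivity or quality across
languages.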