From: munck@MBUNIX.MITRE.ORG (Bob Munck)
Newsgroups: comp.lang.ada
Subject: O-O and general S/W Metrics
Message-ID: <27053.618762611@mbunix>
Date: 10 Aug 89 14:30:11 GMT
Reply-To: munck@MITRE.org
Organization: The Internet

Boy, I've got to agree with Ralph Johnson (johnson@p.cs.uiucf.edu):

> I am usually suspicious of software metrics. In general, we do not
> know what to measure in software. We really need to measure the
> IDEAS in software, not the lines of code, though nobody knows how
> to do that.

It is, I believe, possible to ASSESS the quality of software by careful and laborious reading and study of it. A very good programmer with extensive experience on long-term, multiple-person projects can, I believe, predict quite accurately the additional time needed to put a piece of software into production use and the relative cost of maintaining and enhancing it. The entire field of software metrics seems to me to be an attempt to discover mechanical short-cuts for doing this, and I've never seen one that convinced me it could.

Likewise, the current fad of O-O* seems to me a similar attempt at short-cutting an intrinsically difficult job. A favorite book of mine, "Zen and the Art of Motorcycle Maintenance," makes the point that quality is not quantitative; we can recognize it, but we cannot completely characterize it. The little rules we make up, and then measure with metric tools, are never absolutely true. Given such a tool, I can always write a bad program that it will rate as good, and usually a good program that it will rate as bad.
Finally, I've never yet seen an attempt to validate a software metric under "real world" conditions. That is, I've never seen a comparison of the output of a metric tool with the actual life-cycle costs of the code it was applied to. Likewise, I've never seen a demonstration that O-O (or Ada, for that matter) reduces life-cycle costs. The closest thing I've seen was Larry Weissman's graduate work at U of T two decades ago (not the gerbil study).

-- 
Bob Munck

* When I read "O-O" or "OO" (for "object oriented"), I always hear Fred Gwynn (not sure of the name) in "Car 54, Where Are You?" saying his famous "Ooh! Ooh!" It seems to fit.