From: tmoran@acm.org
Newsgroups: comp.lang.ada
Subject: Re: Is strong typing worth the cost?
Date: Wed, 01 May 2002 18:22:29 GMT

>So we might say, "There is no rigorous, *scientific* proof of the
>OTOH, there actually *is* some half-way scientific evidence that Ada *does*

  The relevant question is "what is likely to make me money".  There's
rarely "rigorous, *scientific* proof" available (except on late night
TV), so you go with half-baked evidence and statistical probabilities.
That's what "statistical decision theory" is about.  You have a set of
probabilities and costs, e.g., a 1% chance strong typing will double
productivity, a 10% chance it will improve productivity by 20%, a 69%
chance it will make no difference, and a 20% chance it will decrease
productivity by 15%.  Given those numbers,

              Ratio of new    Expected
     Prob     productivity    value
     +0.01        200%           2
     +0.10        120%          12
     +0.69        100%          69
     +0.20         85%          17
                             = 100%

the expected value of strong typing is the same productivity you had
before, i.e., no improvement.  But if you spend money on an experiment
and improve your estimates to, say, no chance of a doubling of
productivity, but a 15% chance of a 20% improvement, and a 20% chance
of only a 10% loss, then the expected value rises:

              Ratio of new    Expected
     Prob     productivity    value
     +0.00        200%           0
     +0.15        120%          18
     +0.65        100%          65
     +0.20         90%          18
                             = 101%

  Let's assume the experiment is 50-50 likely to give those results, or
to leave the original estimates untouched.  Then the expected value of
the experiment is a 0.5% reduction in your company's software costs,
for the foreseeable future.  You ought to be willing to spend up to the
discounted present value of that improvement for such an experiment.
If your company expects to employ 10 programmers for 10 years and has a
10% discount rate, your future software costs are
10 + .9*10 + .9**2*10 + ... + .9**9*10 = about 65 man-years.  0.5% of
that is about a third of a man-year, which is what it's worth to your
company to perform the experiment.  If you have 100 programmers, it's
worth investing over three man-years in the experiment.  So the obvious
question is "why haven't large companies, with many programmers and a
long time horizon, done the experiment yet?"
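
  FWIW, for anyone who wants to plug in their own estimates, here is a
small sketch in Ada (it's not from the discussion above; the program
name Typing_Payoff and the two helper functions are just mine, and the
tables and the 10-programmer / 10-year / 10%-discount figures are the
assumptions used in the text).  It computes the expected productivity
ratio for each set of estimates and the discounted man-years at stake:

with Ada.Text_IO; use Ada.Text_IO;

procedure Typing_Payoff is

   type Outcome is record
      Prob  : Float;  --  probability of this outcome
      Ratio : Float;  --  new productivity, as a percentage of today's
   end record;
   type Outcome_List is array (Positive range <>) of Outcome;

   --  Expected productivity ratio: sum of Prob * Ratio over all outcomes
   function Expectation (Outcomes : Outcome_List) return Float is
      Sum : Float := 0.0;
   begin
      for I in Outcomes'Range loop
         Sum := Sum + Outcomes (I).Prob * Outcomes (I).Ratio;
      end loop;
      return Sum;
   end Expectation;

   --  Discounted man-years of future software work:
   --  Programmers employed for Years years at the given discount Rate
   function Discounted_Man_Years
     (Programmers : Float; Years : Positive; Rate : Float) return Float
   is
      Sum : Float := 0.0;
   begin
      for K in 0 .. Years - 1 loop
         Sum := Sum + Programmers * (1.0 - Rate) ** K;
      end loop;
      return Sum;
   end Discounted_Man_Years;

   --  Current estimates (first table above)
   Before : constant Outcome_List :=
     ((0.01, 200.0), (0.10, 120.0), (0.69, 100.0), (0.20, 85.0));

   --  Estimates after a favorable experiment (second table above)
   After : constant Outcome_List :=
     ((0.00, 200.0), (0.15, 120.0), (0.65, 100.0), (0.20, 90.0));

   Costs : constant Float := Discounted_Man_Years (10.0, 10, 0.10);

   --  Experiment is 50-50 to improve the estimates, so take half the gain
   Gain : constant Float :=
     0.5 * (Expectation (After) - Expectation (Before));

begin
   Put_Line ("Expected ratio now:   " & Float'Image (Expectation (Before)) & "%");
   Put_Line ("After the experiment: " & Float'Image (Expectation (After)) & "%");
   Put_Line ("Discounted man-years: " & Float'Image (Costs));
   Put_Line ("Experiment is worth about"
             & Float'Image (Gain / 100.0 * Costs) & " man-years");
end Typing_Payoff;

  Any Ada compiler should do (e.g., "gnatmake typing_payoff" with GNAT);
just edit the two tables, the head count, and the discount rate to
match your own situation.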