From: "Marin David Condic"
Newsgroups: comp.lang.ada
Subject: Re: Is strong typing worth the cost?
Date: Tue, 30 Apr 2002 13:45:20 -0400
Organization: Posted on a server owned by Pace Micro Technology plc

Experiments in software engineering could get very expensive. Maybe not as
expensive as high-energy particle physics, but pretty expensive. To be
meaningful, they would also need to be of sufficient size that they would
become very time consuming - making any results rather dated by the time
they arrived. Consider that to test the hypothesis "Strong Typing In A
Language Improves Productivity And Reduces Errors" you would need the
following:

1) A set of requirements for a project that should result in software of
100,000 SLOC or better. (Small projects aren't likely to show much benefit,
nor are they realistically representative of real-world programming.)

2) Two designs to those requirements that are as close to identical as
possible - one based on a strong typing model and the other relying on raw
machine types. (This may not even be possible, since those really reflect
two different design philosophies.)

3) Two compilers that support identical languages except that one has
strong type checking and the other has just machine types with automatic
conversions, etc. (You can't just say "use the standard types in Ada"
because even those are strongly typed and eliminate a whole class of errors
on their own - see the sketch after this list. You'd need something like
Ada with C's type checking, and I'm not convinced you could get there
without destroying other aspects of the language that would then alter the
validity of the results.)

4) Two teams of similarly experienced engineers. (Does the test prove the
hypothesis for inexperienced students or for experienced engineers? Did
they know the languages in question before they started the project, or
did they have to learn them? It isn't simple, is it?) The engineers would
have to be available full-time for the duration of the project (six months,
to be realistic?), so start thinking of salaries and how to get engineers
to forgo or interrupt promising careers to be part of the study.

5) Identical development environments - the only variable being the
language's support for the type model. (Is that fair? What about compiler
bugs? Could they account for delays, reported errors, etc.?)

6) A well-defined development process that is adhered to by both groups.
(Errors are injected by sloppy CM, lack of code reviews, etc. How tightly
is strong typing tied to development process anyway? Are weakly typed
languages used in places with no identifiable process, and strongly typed
languages used by developers with well-organized and well-followed
procedures?)

7) A rigorous test suite that can be applied to both end products to
determine compliance with requirements and the existence of errors. (You
can't write a test suite that detects all errors, but it's got to be
thorough enough to expose the most egregious ones or it's not going to say
anything about the hypothesis.)
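Just to make concrete what I mean in (3) about even Ada's standard
facilities: here's a minimal sketch (the types and values are invented
purely for illustration) of the class of error that strong typing
eliminates before the program ever runs:

    --  Two numeric types with the same machine representation but
    --  distinct identities. Mixing them without an explicit conversion
    --  is a compile-time error in Ada; with raw machine floats the
    --  mix-up would compile silently.
    procedure Typing_Demo is
       type Feet   is new Float;
       type Meters is new Float;

       Altitude : Feet   := 10_000.0;
       Ceiling  : Meters := 3_000.0;
    begin
       Altitude := Altitude + Feet (Ceiling);  --  OK: explicit conversion
       --  Altitude := Altitude + Ceiling;     --  rejected: Feet /= Meters
    end Typing_Demo;

A weakly typed comparison language would represent both variables as the
same raw float and accept the second assignment without complaint - and
that single difference is exactly the variable the experiment would
somehow have to isolate.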
Even when you get done with all that, we can still cast serious stones at
whatever results are demonstrated:

1) It's only one trial. To reflect reality, it ought to be conducted many
times, with different teams at different skill levels.

2) It's only one problem domain. Type checking might be really important in
one problem domain and not a big deal in another.

3) Whatever the project looks like, it's artificial and doesn't reflect the
kinds of things that need to be done or dealt with in a real project -
shifting requirements, vague problem definitions, needing to meet
deadlines, etc.

4) Whatever the results are, they reflect the technology available at the
start of the research project (six months? a year before the results are
ready?), and that is a long time in Computer Years. Would the better
debuggers, faster computers, automatic code generators, analysis tools,
etc. that are available today have changed the outcome?

All in all, it's a lot simpler and less costly to deal with questions like
"What happens if I drop different-sized cannonballs off a tall building?"
than it is to ask questions about software engineering. Not that I'd be
against asking those questions and trying to come up with experiments to
investigate them, but you've got to admit that doing so in anything like a
scientific manner would cost a LOT of money. It's probably worth doing, but
who would pay? (I'd personally love to have a research grant to conduct a
study like that. It would get me out of hacking C code for a living! :-)

Question: What I outlined here is pretty rigorous. Would you accept as
useful or valid some kind of experiment that didn't require such controlled
conditions?

MDC
--
Marin David Condic
Senior Software Engineer
Pace Micro Technology Americas    www.pacemicro.com
Enabling the digital revolution
e-Mail:    marin.condic@pacemicro.com

"dmjones" wrote in message
news:Xns9200B524E1781derekknosofcouk@62.253.162.109...
> > Is experimental evaluation a dead duck in software engineering?
> >
> > http://citeseer.nj.nec.com/lukowicz94experimental.html