Newsgroups: comp.lang.ada
Subject: Re: What is wrong with Ada?
References: <1176150704.130880.248080@l77g2000hsb.googlegroups.com> <461B52A6.20102@obry.net> <461BA892.3090002@obry.net> <82dgve.spf.ln@hunter.axlog.fr> <1176226291.589741.257600@q75g2000hsh.googlegroups.com> <4eaive.6p9.ln@hunter.axlog.fr> <1176396382.586729.195490@y5g2000hsa.googlegroups.com>
From: Markus E Leypold
Organization: N/A
Date: Mon, 16 Apr 2007 16:50:55 +0200

"Bob Spooner" writes:

> "Markus E Leypold" wrote in message
> news:n3r6qpea4p.fsf@hod.lan.m-e-leypold.de...
>>
>> "kevin cline" writes:
>>
>>> Are you claiming that use of Ada makes it safe to release code that
>>> has never been tested?
>>
>> Actually -- why not? In my experience sparsely tested Ada code
>> usually has the quality of extensively tested C code. Perhaps my
>> cases were not really comparable, but it is astonishing how much Ada
>> code is "just right" if it compiles.
>>
>> Of course we're not talking about embedded programs / control
>> systems of whatever kind here. And those should probably rather be
>> verified and/or extensively formally reviewed instead of only
>> tested.
>>
>> Remember: Formal review is a (proven) much better QA tool than
>> testing.
>>
> This has been my experience as well. When I was first learning Ada, back

I'd like to add to my statement and your experience that there is an
IBM study on this: certain quality levels could only be reached by
combinations of QA methods that all included formal review, whereas no
combination of the other methods without formal review attained them.

That is a sobering realization, considering that much of the code in
most companies I know is never seen by more than one pair of eyes.

I could look up the study, but my notes are currently out of reach.

> around 1986 or so, I was astonished at the percentage of programs I wrote
> which, once I had a clean compile, did _exactly_ what I expected. It's borne
> out as well in the form those "stump the experts" sessions take. With C type
> languages, the question is: "What does this do?" With Ada, the question is:
> "Will this compile?"

The same holds for almost every other strongly and statically typed
language (I've had similar experiences with Turbo Pascal, when I was
young, and with Ocaml). Still, the relationship between program and
specification, between specification and requirements, and between
requirements and what the customer wants (or says she wants) must be
checked, i.e. reviewed, because the compiler won't do that for you.
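As a concrete, made-up illustration of what the compiler does catch for
you: in C, "if (count = 0)" compiles -- at best with a warning -- and
assigns zero instead of comparing; the nearest Ada spelling cannot go
wrong that way, because ":=" is a statement rather than an expression,
and the condition of an "if" must be of type Boolean. The procedure
below is only a sketch, with names invented for the example:

   --  Illustrative sketch only; names are made up.
   procedure Check_Count is
      Count : Integer := 10;
   begin
      --  "if Count := 0 then" is rejected outright: assignment is a
      --  statement in Ada, not an expression, so it cannot appear here.
      if Count = 0 then   --  the only way to spell the comparison
         null;
      end if;
   end Check_Count;

Nothing deep, but it is exactly the class of slip that costs a
debugging session in C and a single compiler message in Ada.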
The fact that static type systems are so effective at reducing the bug
rate during development indicates, in my eyes, how many bugs (in C,
C++, assembler) are simple omissions (forgetting to set or increment a
variable) or typos (assigning instead of comparing, forgetting to
dereference a pointer, or using the wrong pointer in some place).

Instead of review (if one isn't working in a larger team, e.g. students
or "garage developers"): if one has the discipline, a very useful
exercise is to write a module, let it lie around for a week or two, and
then write the interface documentation for it. That forces one to
reconsider the code in question.

Of course I'm talking about empirically grown code here, or code that
has been factored out from other code, not something that has gone
through a NASA-style development process. Since this happens all the
time in reality anyway, the least one can do is look at such code
twice, instead of insisting that it should never happen and stopping
there, as some people do.

Regards -- Markus