From: Midoan
Newsgroups: comp.lang.ada
Subject: Re: Is Aunit helpful?
Date: Sun, 15 Aug 2010 14:47:21 -0700 (PDT)
Message-ID: <32dc1191-0a83-40ef-8bbc-a13a06f2167e@u26g2000yqu.googlegroups.com>

On Aug 15, 1:10 pm, Stephen Leake wrote:
> Midoan writes:
> > On Aug 14, 6:57 am, Stephen Leake wrote:
> >> "Yannick Duchêne (Hibou57)" writes:
> >> > About AUnit: just seen what it is, how it is set up and how it
> >> > works. A question seems still pending: “how to be sure the tests
> >> > cover all relevant cases?”. I do not see a way to be sure testing
> >> > covers all cases.
>
> >> Correct, AUnit does not do that. gcov does, although I have not used it
> >> very much. It can be difficult to use the output of gcov.
>
> >> > That is the main limitation of this kind of approach.
>
> >> What alternative approaches provide coverage information?
>
> > FYI, note that with Mika (http://www.midoan.com/), the automatic test
> > data generator for Ada, it is possible to take in your existing test
> > cases, check the coverage achieved, and automatically generate missing
> > test inputs and expected test results.
>
> How can a tool possibly generate expected results? If it reads the code,
> it can only generate the results that the code _will_ produce. But
> that's the opposite of a test; the expected results are what the code
> _should_ produce, based on some other spec (not the Ada spec). A testing
> process _must_ assume the code is wrong.

Yes, of course you are right; that is what we meant: the code's results
must always be validated externally to the tool. Mika simply asks the
question "given these inputs, are these the expected outputs?". The
outputs Mika generates follow from the code under test: they include all
side effects (i.e. including changes to non-local variables). In our
experience, Mika flags many side effects not necessarily intended by the
developers, never mind the designers. If the code's outputs are confirmed
by the human oracle, they can join a set of test cases suitable for
regression testing. If the code is subsequently changed, Mika can flag
the changed behaviour and/or complete the set of test inputs, making it
very useful during development and maintenance (i.e. regression) testing.
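To make the regression idea concrete, here is a purely hypothetical sketch (the unit under test, the values, and the layout are made up for illustration and have nothing to do with Mika's actual output format) of a human-confirmed (input, expected output) pair replayed as an Ada check:

```ada
--  Hypothetical sketch: an (input, expected output) pair, confirmed
--  once by a human oracle, then replayed on every build as a
--  regression check.  Scale is a stand-in for the unit under test.
with Ada.Text_IO;

procedure Regression_Check is
   function Scale (X : Integer) return Integer is
   begin
      return X * 3;
   end Scale;
begin
   --  Human-confirmed expected outputs for the captured inputs.
   pragma Assert (Scale (2) = 6);
   pragma Assert (Scale (-1) = -3);
   Ada.Text_IO.Put_Line ("regression checks passed");
end Regression_Check;
```

(Remember that pragma Assert is only checked when assertions are enabled, e.g. with GNAT's -gnata switch.)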
> If the spec is machine readable, then the tool has a chance. But I see
> no mention of machine-readable specs on the midoan site.
>
> Hmm. If the Ada spec includes pre/postconditions (Ada 2012), then some
> meaningful tests can be generated, but the compiler will already do
> that.
>
> > (which can be validated to form new test cases automatically).
>
> Validated by what? If that means "reviewed by a human", that might be
> ok. But there would be a very strong temptation to say "the tool must
> be right".
>
> > This can be done to achieve branch, decision or MC/DC coverage, as
> > desired.

We are very open to inquiries about machine-readable, commercially
sustainable (preferably!) specification languages used in combination
with Ada (any specification language based on Ada's syntax is
particularly welcome :-) ). In fact, disproving specified behaviour
would be a very attractive proposition for us. But, in our experience,
most specifications (and that includes the code-generated (!)
specifications used by fashionable model-based verification techniques,
as provided by Polyspace and the like) capture only a fraction of the
code's behaviour; e.g. not all package variables are modelled. In
practice, Ada 2012's postconditions are unlikely to be used to capture
the full effect of commercial, safety-critical code. That, in our eyes,
is a major weakness of specification-based verification: the
specification may say 'A' but the code may do 'A and B', and trying to
model 'B' in the specification typically leads to specifications as
complex as the code! So, in conclusion, we at Midoan still believe in
the primacy of human-checked, tool-assisted, systematic software
testing.

> Generating a scaffold that gives coverage could be useful, but it must
> be completed manually, to ensure correct results.

Agreed, indeed.

> --
> -- Stephe

Regards,
The Midoan Team at http://www.midoan.com/
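P.S. A small, made-up Ada 2012 sketch of the 'A and B' point above (names and contract are illustrative only): the postcondition states the result, but says nothing about the package-state side effect, so any verification driven purely by the contract never looks at it.

```ada
package Counters is
   --  Package state: the 'B' the contract never mentions.
   Calls : Natural := 0;

   function Increment (X : Integer) return Integer
     with Pre  => X < Integer'Last,
          Post => Increment'Result = X + 1;  --  the 'A' the spec captures
end Counters;

package body Counters is
   function Increment (X : Integer) return Integer is
   begin
      Calls := Calls + 1;  --  side effect invisible to the postcondition
      return X + 1;
   end Increment;
end Counters;
```

A contract-driven test generator would happily pass every call here while the change to Calls goes unexamined; only a tool (or human) that reads the body sees the full behaviour.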