Newsgroups: comp.lang.ada
Subject: Re: Intervention needed?
From: Maciej Sobczak
Date: Thu, 28 Mar 2019 23:57:53 -0700 (PDT)
Message-ID: <28b6a472-6c3a-40a6-8a96-2e27a65ab2ef@googlegroups.com>
References: <6e1977a5-701e-4b4f-a937-a1b89d9127f0@googlegroups.com> <6f9ea847-2903-48c8-9afc-930201f2765a@googlegroups.com> <87a7hgvxnx.fsf@nightsong.com> <4e240c66-dce8-417f-9147-a53973681e29@googlegroups.com>

> > Which brings a potentially interesting question - what if the reasoning in my head has a continuous measure of correctness? Like, say, I'm 95% confident that the reasoning is correct?
>
> This is not how reasoning work.

This is not how some particular form of reasoning works.

But I have read about the idea that even formal reasoning could come in forms that trade efficiency against correctness. Think of it as analogous to, say, image-processing algorithms, which can likewise trade speed against resolution.

So you could have your program and various (continuous?) ways of choosing between fast and exact. Maybe your particular program is so complex that exact reasoning would take more than a lifetime, whereas a reasonably high level of confidence (but below 100%) could be reached in a timeframe acceptable to you and your customer. What would you do?

Today you just ditch formal methods altogether and fall back on laborious testing. As if that were any wiser.

> But human reasoning is nowhere stochastic.

Paul asked for machine verification, so the limitations of human reasoning need not concern us here.

> BTW, a digital system has an advantage that you can (theoretically)
> ensure some states never entered.

But I'm not talking about system states; I'm talking about our confidence that a given state will or will not be reached. States can be discrete, but confidence (and therefore so-called "certification credit") might have a continuous measure. Of course, we don't yet have formal methods with such a capability, but maybe the methods we have now are not fit for what we are trying to do with them anyway.

-- 
Maciej Sobczak * http://www.inspirel.com
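P.S. To put a number on that "reasonably high level of confidence": a simple Chernoff-style bound says that if N independent random tests all pass, you may claim, with confidence Conf, that the true failure probability is below Eps, as long as N >= ln (1 / (1 - Conf)) / Eps. Here is a minimal Ada sketch of that budget calculation; the procedure, the name Tests_Needed and the example figures are mine, purely for illustration, not a definitive recipe:

with Ada.Text_IO;
with Ada.Numerics.Elementary_Functions;

procedure Confidence_Budget is
   use Ada.Numerics.Elementary_Functions;

   --  If N independent random tests all pass, then with confidence
   --  Conf the true failure probability is below Eps, provided
   --  N >= ln (1 / (1 - Conf)) / Eps   (simple Chernoff-style bound).
   function Tests_Needed (Eps, Conf : Float) return Natural is
   begin
      return Natural (Float'Ceiling (Log (1.0 / (1.0 - Conf)) / Eps));
   end Tests_Needed;

begin
   Ada.Text_IO.Put_Line
     ("95% confidence of failure rate < 1.0E-3:" &
      Natural'Image (Tests_Needed (1.0E-3, 0.95)) & " passing tests");
   Ada.Text_IO.Put_Line
     ("99% confidence of failure rate < 1.0E-6:" &
      Natural'Image (Tests_Needed (1.0E-6, 0.99)) & " passing tests");
end Confidence_Budget;

Note the shape of the dial: 95% confidence of a failure rate below 1.0E-3 already costs about three thousand passing tests, while 99% confidence below 1.0E-6 costs over four and a half million. That is exactly the continuous trade between time and confidence I have in mind.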