From: Ken Garlington
Subject: Re: Computer beats Kasparov
Date: 1996/02/26
Message-ID: <3131C32A.391C@lfwc.lockheed.com>
References: <4g29e2$ea0$1@mhadg.production.compuserve.com> <4gmbdi$rib@toads.pgh.pa.us> <4gqufh$m71@cliffy.lfwc.lockheed.com>
Organization: Lockheed Martin Tactical Aircraft Systems
Newsgroups: comp.lang.ada

Robert Dewar wrote:
>
> "As long as generated code is identical each time you compile the same
> code, it doesn't matter if the code generator uses AI. We use an Ada
> compiler to generate safety-critical embedded SW and have seen
> code generation errors with code generators using common optimization
> techniques."
>
> Well, the issue of whether the code generator "uses AI" (whatever the
> heck that might mean) is a red herring.
>
> The issue is whether the generated code is "reviewable" in the sense
> of Annex H of the RM. Achieving reviewability may involve inhibiting
> some optimizations (regardless of how they are done).

I think what Mike was trying to say is that using extremely complex
optimization techniques -- including, possibly, AI-type heuristics --
to capture the process by which experienced programmers generate
"tight" assembly code would not necessarily be a problem from a
safety-critical standpoint. Assuming that Reviewable gives you the
information to understand the relationship of the generated object
code to the source (which is what I expect it to do), such advanced
optimizations may be tolerable in safety-critical applications.

This assumes, of course, that the Ada toolset generates the same code
given the same initial conditions (a set of source code compiled in
some determinable and consistent order, I suppose). The task would be
more complicated if, for example, the toolset "learned" with each
compilation, so that compiling the same code six months later
generated "tighter" but possibly incorrect code.

The bottom line is that we don't usually know exactly how the toolset
performs its optimizations, and within limits we don't care. We assume
that we will have to validate the resulting code, using Reviewable and
other techniques, to assure its reliability and safety regardless.

The key phrase with respect to disabling optimizations, from the
Rationale: "...some optimizations could be disabled when the pragma
Reviewable is in force, rather than enhancing the compiler to meet the
requirements with full optimization." With Ada 83 we pay to get these
"enhancements," and I suspect we will continue to do so with Ada 95.
As a result, we would not disable optimizations to get reviewable
code. After all, safe code that won't fit in the box is a little
_too_ safe for our needs!
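P.S. For those who haven't looked at Annex H yet, here is a minimal
sketch of what the reviewability support looks like at the source
level. This is just my reading of the (optional) Safety and Security
annex -- a given compiler may not support it at all -- and the
procedure name and variable below are invented for illustration:

   --  Configuration pragma, given before the first compilation unit
   --  (or in the compiler's configuration file). It directs the
   --  implementation to provide information relating the generated
   --  object code to the source (RM95 H.3.1):
   pragma Reviewable;

   with Interfaces;
   procedure Control_Step is
      use type Interfaces.Unsigned_16;
      Command : Interfaces.Unsigned_16 := 0;
   begin
      Command := Command + 1;
      --  Ask that the value of Command be determinable from the
      --  object code at this point (RM95 H.3.2), regardless of what
      --  the optimizer does elsewhere:
      pragma Inspection_Point (Command);
   end Control_Step;

Note that both pragmas constrain what the generated code must make
visible, not how the optimizer works internally -- which fits the
"validate the output, don't certify the algorithm" approach above.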