From: Stuart Palin
Subject: Re: Idea: Array Boundary Checks on Write Access Only
Date: 1998/06/18
Newsgroups: comp.lang.ada
Organization: GEC-Marconi Avionics

Markus Kuhn wrote:
>
> Stephen Leake wrote:
> > I don't see why a "read bug" is ever ok!
>
> An array read bug has only local consequences, a wrong return

Not necessarily so; two counter-examples spring to mind:

1) A read access outside the permissible memory bounds for the
   process, causing an operating-system-generated access violation
   that terminates the whole process.

   This sort of problem might occur on a multi-user computer system
   (e.g. a VAX), where the operating system needs to protect
   independent processes from one another; or even on 'embedded'
   systems where a security policy is enforced by the OS.

2) The read may access a memory-mapped hardware device and cause the
   device to react; for example, an IO device may assume that the
   last set of data has been accessed and initiate the next IO
   transaction - resulting in lost data.

   This sort of problem might occur in embedded systems where
   elaborate protection mechanisms (see case 1 above) are not built
   into the system.

It would be possible to design systems that avoid both these
problems (i.e. hardware access requires the software to be operating
in a 'privileged' mode, and any violation of memory constraints is
'silently' ignored with some hack value returned) - but this is by
no means typical of systems in use.

As you also note, there are other ways in which incorrect read
accesses can lead to subsequent memory access violations (reading an
array of pointers); unfortunately, once you open this Pandora's box
it can quickly become very difficult to manage the risks. You
mention retaining checks on arrays of pointers; but what if the
array read supplies a loop termination condition? The program may go
into an infinite loop. If the value is used in a jump table (a
common implementation of 'case' statements), the program's control
flow becomes unpredictable (it could even execute data - so you may
end up running a different program).

Once you allow data to become erroneous (through whatever means), it
quickly undermines many of the premises on which the whole behaviour
of the program is built. Ada compilers in particular are allowed to
assume that variables conform to their type declarations and to
optimise accordingly; Ada's premise is that Constraint_Error would
be raised if you tried to give a variable a value outside its range.
If you suppress that check (anywhere), you risk creating an
erroneous program - a sketch of this hazard follows below.

All that said, the original idea is not entirely without merit; the
issues of performance that Markus mentions are all too real in many
systems.
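To make the jump-table hazard concrete, here is a minimal sketch
(the names and values are mine, not from the discussion above). With
checks enabled, the out-of-range read would raise Constraint_Error;
with the checks suppressed, the program is erroneous and the case
statement may transfer control anywhere:

   with Ada.Text_IO;

   procedure Jump_Hazard is
      pragma Suppress (Index_Check);
      pragma Suppress (Range_Check);

      type Command is range 0 .. 3;
      Table : array (0 .. 9) of Command := (others => 0);

      Bad_Index : Integer := 42;   --  outside Table's bounds
      C         : Command;
   begin
      --  With Index_Check suppressed, this reads whatever happens
      --  to lie beyond the array; C may now hold any bit pattern,
      --  including one invalid for type Command.
      C := Table (Bad_Index);

      --  A case over a small discrete type is commonly compiled as
      --  a jump table indexed by C; an invalid C can index past the
      --  table and transfer control to arbitrary code (or data).
      case C is
         when 0 => Ada.Text_IO.Put_Line ("halt");
         when 1 => Ada.Text_IO.Put_Line ("start");
         when 2 => Ada.Text_IO.Put_Line ("read");
         when 3 => Ada.Text_IO.Put_Line ("write");
      end case;
   end Jump_Hazard;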
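Conversely, a small sketch (again with invented names) of the kind
of code where Ada's typing already lets the compiler discard the
check, so nothing needs to be suppressed in the first place:

   with Ada.Text_IO;

   procedure Sum_Demo is
      type Index  is range 1 .. 100;
      type Vector is array (Index) of Integer;

      function Sum (V : Vector) return Integer is
         Total : Integer := 0;
      begin
         --  I is drawn from V'Range, so the compiler can prove that
         --  every V (I) is in bounds and omit the index check
         --  entirely, with no loss of safety.
         for I in V'Range loop
            Total := Total + V (I);
         end loop;
         return Total;
      end Sum;

      V : Vector := (others => 1);
   begin
      Ada.Text_IO.Put_Line (Integer'Image (Sum (V)));  --  " 100"
   end Sum_Demo;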
Generally, though, I think it is preferable to prove that you do not
have an erroneous program; with Ada's strong typing it is possible
to write code that is easier for a compiler to optimise, allowing it
to remove unnecessary run-time checks where the rules of the
language make it clear that an error cannot occur.

As Peter Amey notes regarding the work done with SPARK - proving the
absence of errors, rather than living with them and assuming their
scope for causing damage is limited, is generally more sound.

--
Stuart Palin
Consultant Engineer
Flight Systems Division (Rochester)
GEC-Marconi Avionics Ltd