From: bobduff@world.std.com (Robert A Duff)
Subject: Re: Overflows (lisp fixnum-bignum conversion)
Date: 1997/04/08
References: <1997Apr2.202514.1843@nosc.mil>
Organization: The World Public Access UNIX, Brookline, MA
Newsgroups: comp.lang.ada

In article , Robert Dewar wrote:
>Robert Duff said
>
><<> Analogy: Suppose you write a signed integer expression that overflows.
>> In C, you get unpredictable results (exactly what Ada calls erroneous).
>> In Ada, you get a run-time error. In Lisp, you get the right answer.
>> It seems to me that, as far as preventing bugs goes, with respect to
>> this particular issue, C is worst, Ada is better (because it detects the
>> error), and Lisp is better still (because it prevents the error).>>
>
>I find this too facile. The trouble is that better means better according
>to some metric. What are you interested in?
>
>More flexibility
>More efficiency
>More simplicity
>...

I thought I made that clear: I said, "as far as preventing bugs goes". I
meant logic errors in the program. In general, with respect to *this*
metric, the worst possible semantics is "erroneous", better is to detect
the error (either at compile time or run time), and best of all is to
prevent the error. This is because it is easier to reason about the
program if you don't have to worry about weird boundary conditions
involving numbers like 2**31-1, which (usually) have nothing to do with
the problem, and everything to do with the underlying hardware.

I am well aware that other concerns (mainly efficiency) might produce a
different rating -- in particular, for efficiency, "erroneous" is best,
detection second best, and prevention (i.e. Lisp bignums) worst.

Remember that this was all in response to somebody who claimed that
declaring a certain feature of Ada 83 "erroneous" somehow made this
feature of Ada 83 safer than corresponding features in other languages
that require the feature to "work" (such as Ada 95). I disagree with
that.

I also made it clear I was talking about overflow of signed integers --
that is, about arithmetic. For this particular feature, and this
particular metric (bug prevention), I stand by my statement that C is
worst, Ada better, and Lisp better still. Note that I was *not* talking
about array bounds checking, nor about range checking. Just overflows.

Aside: It would be possible to design a language that had range checking
as in Ada, but prevented overflows as in Lisp. In such a language, I
could write:

    type T is range 0 .. N;
    X: T := ...;
    Y: T := ...;
    Average: T := (X + Y) / 2;

and be guaranteed that Average will contain the "right" answer. In Ada,
if N is 10, it will probably not overflow. If N is 2**31, it will
overflow on some compilers in some cases, and if N is 2**32, some
compilers won't even accept it. And don't beat me up about efficiency: I
understand that there would be an efficiency cost to such functionality,
in some cases. (OTOH, the inefficiency could be pretty much avoided by
writing "(T'(X) + T'(Y)) / 2" and making sure the bounds of T match the
hardware bounds.)
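To make the overflow case concrete, here is a minimal Ada sketch of the
(X + Y) / 2 trap in Ada as it stands (the procedure name and the choice
of 2**31 - 1 as the bound are mine, purely for illustration; whether the
check fires at run time depends on the compiler and on overflow checking
being enabled -- with GNAT you may need -gnato):

    with Ada.Text_IO;

    procedure Average_Demo is
       --  Bounds chosen to match a typical 32-bit signed base type
       --  (an assumption; the base type chosen is compiler-dependent).
       type T is range 0 .. 2**31 - 1;

       X : T := T'Last;
       Y : T := T'Last - 1;

       Average : T;
    begin
       --  The intermediate X + Y does not fit in a 32-bit base type,
       --  so Ada raises Constraint_Error here instead of silently
       --  wrapping (C) or widening to a bignum (Lisp).
       Average := (X + Y) / 2;
       Ada.Text_IO.Put_Line (T'Image (Average));
    exception
       when Constraint_Error =>
          Ada.Text_IO.Put_Line ("overflow detected at run time");
          --  One way to dodge the trap: keep every intermediate
          --  result within the range of T (values here are >= 0).
          Average := X / 2 + Y / 2 + (X rem 2 + Y rem 2) / 2;
          Ada.Text_IO.Put_Line (T'Image (Average));
    end Average_Demo;

With checks on, the run should take the exception branch and then print
the correctly computed average; the same expression in C would simply
wrap around, while in Lisp it would quietly widen to a bignum and give
the right answer with no rewriting at all.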
Maybe I could also write:

    type T is range 0..Infinite;
or
    type T is range 0..<>;

or some such thing, meaning the upper bound is arbitrarily large. Maybe
I could also write:

    type T is range 0..;

- Bob