From: Gautier
Subject: Re: Ada vs Delphi?
Date: 1999/08/14
Message-ID: <37B56801.165FAA4B@Maths.UniNe.CH>
References: <37ab421a.5414989@news.total.net> <37ab8bd1.0@news.pacifier.com> <37ae1fc8.653954@news.clara.net> <37AE980F.15E6A15C@maths.unine.ch> <37b12a3f.43564067@news.total.net> <37B27FE8.6E09569E@Maths.UniNe.CH> <37b54516.54588055@news.total.net>
To: Andre Ratel
Newsgroups: comp.lang.ada

> It depends. More often than not, I use integers as counters.
> For example, let's consider an hypothetical situation in which
> we have the following loop
> <<<
> for n1:= -128 to 127
>   for n2:= -128 to 127
>   begin
>     sum:= n1 + n2;
>     ...
>   end; {for}
> >>>

In most cases, simply declare "sum" as Integer and everything will be
fine. NB: in Ada, n1 and n2 are declared by the loop itself and exist
only from the beginning of the loop to its end, as in a true
mathematical sum. If you need longer integers, you can force their
type:

  for n1 in long_integer'(-128)..127 loop

or, if you need to know how many bits:

  for n1 in integer_64'(-128)..127 loop

In any case, every mistake will be caught either at compile time or
at run time (when range checks are on).

(...)

> A safe language should be such that all evaluations are performed
> using the largest set of numerals available (longint for integers,
> extended for real numbers).

Such an approach is safe but can be prohibitively slow. In fact it
mainly makes it easier to write small, fast compilers in assembler...
With explicit type conversions, and operators precisely defined for
each particular type, there is a _safe_ and _fast_ solution. Let's
look at Borland's example:

> {$N+} {using the numerical coprocessor}
>
> var
>   X, A, B, C: real;
> begin
>   X:= (B + Sqrt(B*B - A*C))/A;
> end;

According to their documentation, there will be 6 real-to-extended
conversions and 1 extended-to-real: 8-(( . In addition, the "real"
type in TP is an old floating-point format unrelated to the FPU: the
7 conversions are done "by hand" in TP's run-time library!! A quick
disassembly gives:

  mov   ax,[b]
  mov   bx,[b+2]
  mov   dx,[b+4]
  call  frealext   ;SYSTEM.TPU
  mov   ax,[b]
  mov   bx,[b+2]
  mov   dx,[b+4]
  call  frealext   ;SYSTEM.TPU
  fmulp st(1),st

just for computing B*B. No wonder TP is not a star in numerics...
A "Turbo Ada" would have (in its Ada.Numerics.***) "+", "-", "*",
"/", "Sqrt" functions for real, and the same set for every other
available floating-point type (FPU or not), with the most direct
code (i.e. the fastest) each time, all of it absolutely safe... If
you need a certain precision, you store the values in floating-point
variables of the required precision.

(...) [my example]

> For (i1 + i2), the common type (integer) is used for the
> evaluation and, since the sum is outside the range, we get
> the wrong result -1536. This problem could be avoided if,
> instead of using the common type, Turbo Pascal would use
> longint.

That would not have been smart at all for 16-bit code - Borland
wanted reasonably fast code for integers - since in 99% of other
contexts (i1 + i2) stays within range.
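To see the contrast concretely, here is a minimal Ada sketch (the
names Overflow_Demo and Short are mine, not from any library; it
assumes checks are not suppressed, which is the language default):

  with Ada.Text_IO; use Ada.Text_IO;

  procedure Overflow_Demo is
     type Short is range -128 .. 127;  --  analogue of TP's shortint
     N1 : Short := 120;
     N2 : Short := 120;
     S  : Short;
     N  : Long_Integer;
  begin
     --  Widen first, then add: the correct 240, no check can fail.
     N := Long_Integer (N1) + Long_Integer (N2);
     Put_Line (Long_Integer'Image (N));

     --  Add within the narrow type: 240 is outside -128 .. 127, so
     --  with checks on Ada raises Constraint_Error here instead of
     --  silently delivering a wrapped value (like the -1536 above).
     S := N1 + N2;
     Put_Line (Short'Image (S));  --  never reached
  exception
     when Constraint_Error =>
        Put_Line ("Constraint_Error: the sum does not fit in Short");
  end Overflow_Demo;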
In addition, the problem would arise again with
(2000000000+2000000000): it means that everything should be computed
with 64 bits (as Delphi seems to do); but then I come up with an
example of (2^63+2^63), and now everything should be computed with
128 bits (even slower)! You see that this is not the right approach.
The problem here is that TP was written without a precise idea about
this, adding features from version to version... Their {$R+} (range
checks on) should have caught the overflow if they hadn't confused
signed and unsigned for the 16-bit "+". NB: I discovered that bug by
constructing my example!

> {$R+}
>
> const
>   i1 = 32000;
>   i2 = 32000;
>
> var
>   n1, n2: shortint; {range: -128 .. 127}
>   N: longint;       {range: -2147483648 .. 2147483647}
>
> begin
>   n1:= 120;
>   n2:= 120;
>   N:= n1 + n2;
>   Writeln(N);
>
>   N:= i1 + i2;
>   Writeln(N);
>   N:= i1 - 5*i2; {here, I put something new}
>   Writeln(N);
> end.

> According to the first rule above, i1 and i2 should each be
> casted as "integer with the smallest range that includes the
> value of the integer constant". Since integer is the smallest
> range including 32000, we should again get into trouble.
> Surprisingly, when I run the above program, I get the correct
> results:
>   240
>   64000
>   -128000
> So, it seems, something is still escaping me.

The thing is that when TP sees an expression containing only
constants, it computes it at compile time. E.g. you will find in the
assembler code that TP hardcoded

  N:= i1 - 5*i2;

as

  N:= -128000;

With typed constants you would find the bug again... NB: in Turbo
Pascal, typed constants are in fact variables - you can change their
value!!!

> -=[Handling types with Ada]=----------
> how would you declare variables n1, n2, and sum in Ada? (...)
>
> > In Ada you can handle ranges and bits independently: (...)
>
> This is new to me. I've never seen anything like this in the
> languages I used before (Fortran 77, Pascal, C, IDL). I think
> I'm beginning to get the idea, but I would appreciate a small
> example, just to see how this looks in Ada.

See the beginning of this message... This Ada feature is at the same
time the safest and the fastest for calculations (integer,
floating-point, fixed-point). Even respectable Fortran compilers used
for high-precision number-crunching make horrible mishmashes of
precision because of weak typing. For a start, if you compile a
constant explicitly typed as double precision with DEC Fortran or
Lahey Fortran, both will cheerfully cut off all the extra decimals as
if it were single precision!!!
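Since you asked for a small example, here is a minimal sketch of what
such declarations could look like (the names Types_Demo, Counter and
Real_15 are mine, not from any standard library; the 'Size clause is
only there to show that the range and the number of bits are declared
independently):

  with Ada.Text_IO; use Ada.Text_IO;

  procedure Types_Demo is
     --  Range and size are independent: the range is the contract,
     --  the number of bits a representation detail.
     type Counter is range -128 .. 127;
     for Counter'Size use 16;    --  16 bits, yet still checked
                                 --  against -128 .. 127
     --  Floating point requested by precision, not by machine name:
     type Real_15 is digits 15;  --  at least 15 decimal digits

     Sum : Integer;
  begin
     for N1 in Counter loop
        for N2 in Counter loop
           Sum := Integer (N1) + Integer (N2);
        end loop;
     end loop;
     Put_Line (Integer'Image (Sum));             --  254 (127+127)
     Put_Line (Integer'Image (Real_15'Digits));  --  15
  end Types_Demo;

--
Gautier

--------
http://members.xoom.com/gdemont/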