From: stt@houdini.camb.inmet.com (Tucker Taft)
Subject: Re: Why no constraint error?
Date: 1997/03/22
Message-ID: 
Sender: news@inmet.camb.inmet.com (USENET news)
X-Nntp-Posting-Host: houdini.camb.inmet.com
References: 
Organization: Intermetrics, Inc.
Newsgroups: comp.lang.ada

Robert A Duff (bobduff@world.std.com) wrote:

: In article <5gs81q$114r@prime.imagin.net>,
: Samuel Mize wrote:
: >In article <5gs20s$2g11@prime.imagin.net>,
: >Samuel Mize wrote:
: >>The question is, why doesn't this code raise an exception when run
: >>under GNAT?
: >>
: >>   pragma Normalize_Scalars;
: >>   with Ada.Text_Io;
: >>   procedure Test_Subrange_Checks is
: >>     type T_Source is new Integer range 7 .. 10;
: >>     type T_Target is new Integer range 7 .. 10;  -- identical ranges
: >>
: >>     Source: T_Source;  -- initialized out of range by Normalize_Scalars
: >>     Target: T_Target := 10;
: >>   begin
: >>     Target := T_Target (Source);  -- no range check occurs!!!!!!!!!!
: >>     Ada.Text_Io.Put_Line (T_Target'Image (Target));
: >>   end Test_Subrange_Checks;
: >
: >It turns out GNAT is right.  (No big surprise.)

: No, I believe GNAT is wrong.

Well Bob, I believe GNAT is technically right in this one, though I
think it may be pessimizing the code overall if it follows the approach
implied by the above (see below for more discussion).

One thing to keep in mind is that "out of the box" GNAT suppresses
certain run-time checks (not my favorite feature of GNAT, I might
say ;-).  I trust this was compiled with all checks *on*...

: >I think I've found it.  13.9.1(9) defines invalid representations;
: >it also states "The rules of the language outside this subclause
: >assume that all objects have valid representations."
: >
: >So, the compiler can omit the range checks by assuming that
: >the data is valid.

: No, that's not what I meant when I wrote 13.9.1(9).  That is, the
: compiler cannot assume data is valid.  The *rules* in the rest of the RM
: assume valid data, but that assumption is wrong, and 13.9.1 tries to
: fill in the resulting logical holes.

: In your example, Source should be initialized to some invalid value of
: type T_Source, such as 11.  13.9.1(10) applies, and 11 is greater than
: 10 (despite the fact that 11 is invalid), and so should fail the range
: check.  I believe there is no permission to omit that check.

I believe there is permission to omit it, though I think it is unwise
to do so (see below).

: Unfortunately, 13.9.1(11) is a loophole you could drive a truck through.
: The assumption is that compilers will be reasonable.  There's some AARM
: discussion on this point.  But this para is irrelevant to your example.

: I'm not entirely sure if the above analysis is correct.  Tucker, if
: you're listening, can you comment?

: >Note that, in a similar case, an array reference can point to
: >any arbitrary memory location (uninitialized scalar used as
: >an array index).

: It was definitely a goal of Ada 9X that an uninitialized variable cannot
: cause arbitrary memory locations to be overwritten (e.g., when you say
: "A(I) := ..." and I is uninitialized).  This is true whether or not you
: say Normalize_Scalars.  (In Ada 83, such a case is erroneous, and so
: could trash memory or do anything else bad.)

: >While I understand this from an efficiency point of view, I'd
: >like it to be different.  Whine, whine.  One project I'm on is
: >auto-converting a huge base of occam code to Ada, and a number
: >of uninitialized integers are biting us in the tail.
: >(don't start, we DON'T HAVE occam on our target machine)
: >
: >However, I now see how this optimization is allowed by the
: >formal rules, so I'll live with it.
: >
: >One useful compiler option, it seems to me, would tell the
: >compiler to NOT omit such checks in such cases -- to do
: >explicitly all range checks.  I'd love to be able to test
: >some of this auto-generated garbage under such an option.

: I'd like to have an option to check all uninit vars.
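A brief aside on that wish: Ada 95 already provides a portable hook for
this kind of testing, namely the 'Valid attribute (RM 13.9.2).
Evaluating X'Valid is not considered a read of X, so it can safely be
applied to a possibly-uninitialized scalar.  A minimal sketch, reusing
the names from Sam's test (the surrounding statement is my addition,
not part of his example):

   if not Source'Valid then
      Ada.Text_Io.Put_Line ("Source holds an out-of-range bit pattern");
   end if;

Combined with pragma Normalize_Scalars, which initializes otherwise
uninitialized scalars to an invalid value where one exists, such checks
make it practical to audit auto-generated code, though admittedly that
is insertion-by-insertion work rather than a single compiler option.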
Here is my explanation of the issue:

For checks whose failure might result in real "damage," the compiler
must be very careful about what it assumes.  In particular, it must not
assume that variables are initialized unless it can prove they are.
This means that for certain checks, the compiler should not "believe"
the declared range of the variable.

In this particular case, the propagation of an uninitialized value from
one variable to another is relatively benign, since no memory is being
trashed by this propagation.  The real question is what happens when
Target is used as an index into an array.  Does GNAT remember that
Target might become deinitialized by the assignment, and hence do the
check when Target is used as an index?  Or does GNAT see that Target is
initialized to a valid value, and presume that it never becomes
deinitialized?

In our AdaMagic front end, objects are identified by the compiler as
either "reliable" or "unreliable."  For reliable objects, we presume
they are within their declared subtype, and we make sure that
presumption is not violated.  In particular, we presume that explicitly
initialized scalar variables are reliable: we believe in their declared
subtype, and we make sure we don't allow them to become deinitialized.
Scalar variables that are not explicitly initialized are considered
"unreliable," and we don't believe their declared subtype unless we can
prove through other means that they are within it.  So in the above
test case, we would certainly do the check, because we are assigning
from an unreliable variable to a reliable one.

An alternative approach is to mark Target as unreliable after such an
assignment, and factor that into future decisions.  That requires some
amount of data flow analysis in the front end, which is more often
deferred until an optimization phase.  Also, marking Target as
unreliable might pessimize the code if there are a number of uses of
Target, each of which would then require more checks because of that
unreliability.  A global optimizer can look around and decide where to
put checks.  Our front end adopts some simple rules (as exemplified
above) that allow it to run in essentially one pass.

I'm curious what approach GNAT adopts, and whether the above would in
fact reveal a bug if Target were used as an array index...

: - Bob

-Tucker Taft   stt@inmet.com   http://www.inmet.com/~stt/
Intermetrics, Inc.  Burlington, MA  USA
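P.S.  To make the array-index question concrete, here is the sort of
extended test I have in mind.  The array type and the indexed
assignment are my additions to Sam's example, so take this as a sketch
of the failure mode rather than a report of what any compiler actually
does:

   pragma Normalize_Scalars;
   with Ada.Text_Io;
   procedure Test_Index_Checks is
     type T_Source is new Integer range 7 .. 10;
     type T_Target is new Integer range 7 .. 10;
     type T_Array  is array (T_Target) of Character;

     Source : T_Source;            -- initialized out of range by Normalize_Scalars
     Target : T_Target := 10;      -- explicitly initialized, hence "reliable"
     A      : T_Array := (others => 'x');
   begin
     Target := T_Target (Source);  -- may propagate an invalid value into Target
     A (Target) := 'y';            -- this index must be checked somewhere, or
                                   -- memory outside A could be overwritten
     Ada.Text_Io.Put (A (T_Target'First));
   end Test_Index_Checks;

A compiler is free to omit the range check on the conversion, or to
believe Target's declared subtype and omit the index check, but not
both: omitting both would let an uninitialized scalar overwrite
arbitrary memory, which is precisely what Ada 9X set out to prevent.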