From mboxrd@z Thu Jan  1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level: 
X-Spam-Status: No, score=-0.3 required=5.0 tests=BAYES_00, REPLYTO_WITHOUT_TO_CC autolearn=no autolearn_force=no version=3.4.4
X-Google-Language: ENGLISH,ASCII-7-bit
X-Google-Thread: 103376,5752ba976f4dad11
X-Google-Attributes: gid103376,public
From: jschafer1@iquest.net
Subject: Re: GNAT 3.01 Source For OS/2
Date: 1996/05/01
Message-ID: 
X-Deja-AN: 152805036
sender: news@iquest.net (News Admin)
x-nntp-posting-host: ind-000-236-33.iquest.net
references: 
organization: IQuest Internet, Inc.
reply-to: jschafer@iquest.net
newsgroups: comp.lang.ada
Date: 1996-05-01T00:00:00+00:00
List-Id: 

On 27 Apr 1996 11:53:09 -0400, dewar@cs.nyu.edu (Robert Dewar) wrote:

> Well that makes it clear that something is very strange with what you
> are doing. Of course all 3.03 releases are built with the 3.03 sources,
> and of course they all compile their own libraries. Probably you are
> just building in what for us is a non-standard way (for example it
> is possible that you use different switches -- your example with the
> Natural and -1 is clearly such an example, since the library is designed
> to be compiled with checks off). Of course this is a (minor, not affecting
> functionality) bug, which should be fixed. If you want to send us details,
> we will be happy to look at it.

O.K. You asked for it.

First of all, the return-a-value-out-of-range problem. This problem is in the compiler itself. Take a look at the Table package. The generic parameters ask for an index type for the table and a low bound for the array. Inside the package this type is ignored, and a plain integer value is used to keep track of the low bound and the number of elements currently used in the table. When the package is elaborated, the internal variable "Min" is initialized to Int (Table_Low_Bound). When a table gets initialized, the "Last_Val" variable gets initialized to "Min - 1".

Now for the problem. Consider what happens when the "Last" operation is called immediately after a table is initialized. It returns the value "Table_Index_Type (Last_Val)". But since Min is equal to Table_Low_Bound and Last_Val is one less, by DEFINITION the call to the "Last" operation MUST FAIL!

Since I have defined the problem, I had better tell you where it is a killer. Now look at the instantiation of the Table generic in the errout package named "Warnings". Because its index range is Natural and its low bound is 0, this is a potential problem. Where it becomes a REAL problem is at the end of reading the source code for a file, when any warnings are turned on again. Take a look at the call to Set_Warnings_Mode_On in the procedure Par.Ch10.P_Compilation_Unit. "Set_Warnings_Mode_On" makes a call to "Warnings.Last"! So if no warning pragmas have been encountered, the table has been initialized and never added to, and the call to Set_Warnings_Mode_On kills the compiler ITSELF!

This is my FIRST example of why I say the 3.03 executables you are putting out DON'T match the 3.03 source. If they did, and checking was turned on when the compiler was compiled, EVERY single 3.03 executable on cs.nyu.edu would not work! Another possibility is that you patched the code you used to make the 3.03 executables. If that is true, I am correct without argument. The only other possibility is that ACT compiles the compiler with checks turned off when it produces its executables.
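To make the failure mode concrete, here is a stripped-down sketch of the pattern I am describing. This is NOT the actual GNAT Table package; it is an illustration with made-up names (Simple_Table, Table_Last_Demo) that compiles on its own. The point is only that, with checks on, "Last" on a freshly initialized, empty table whose low bound is the first value of the index subtype has nowhere legal to go:

   --  Illustration only: a simplified stand-in for the pattern described
   --  above, not the real GNAT source.
   with Ada.Text_IO;

   procedure Table_Last_Demo is

      generic
         type Table_Index_Type is range <>;
         Table_Low_Bound : Table_Index_Type;
      package Simple_Table is
         procedure Init;
         function Last return Table_Index_Type;
      end Simple_Table;

      package body Simple_Table is
         --  Internal bookkeeping is done in a plain integer type
         Min      : constant Integer := Integer (Table_Low_Bound);
         Last_Val : Integer := Min - 1;

         procedure Init is
         begin
            Last_Val := Min - 1;  --  empty table: one below the low bound
         end Init;

         function Last return Table_Index_Type is
         begin
            --  For an empty table with Table_Low_Bound = 0 over Natural,
            --  Last_Val is -1, so this conversion MUST raise
            --  Constraint_Error when checks are on.
            return Table_Index_Type (Last_Val);
         end Last;
      end Simple_Table;

      --  Mirrors the shape of the "Warnings" instantiation: Natural index,
      --  low bound of 0.
      package Warnings is new Simple_Table (Natural, 0);

   begin
      Warnings.Init;
      Ada.Text_IO.Put_Line (Natural'Image (Warnings.Last));
   exception
      when Constraint_Error =>
         Ada.Text_IO.Put_Line ("Last on an empty table raised Constraint_Error");
   end Table_Last_Demo;

Build it with checks on and the call to Last blows up; suppress checks and it happily returns -1 as a "Natural". That is exactly the distinction at issue here.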
The problem here has to do with the obvious implication that ACT is in the habit of TESTING their compilers with checks TURNED OFF. The problem here once again goes back to a quality assurance issue. If ACT isn't willing to use the basic features of the language to help enforce a stable level of quality control in its maintenance of GNAT, how can you expect me to believe anything except: "GNAT is Free and Still Costs Too Much"?

> Regarding the actual subtypes, you are treading in a very difficult area of
> the compiler. In general the additional actual subtypes are ESSENTIAL to
> fix a number of critical bugs -- so your changes will cause a lot of
> serious regressions. It is true that certain programs can generate
> more code as a result. For us more code is less of a problem than wrong
> code! In fact in 3.04, we have largely, but not completely, eliminated
> this impact, while retaining the proper functionality, which was not
> easy to do.

O.K., once again you talk in general terms. WHAT bugs were fixed? WHERE is the actual subtype information "ESSENTIAL"? So why is it that whenever there is a selected component that selects a field from a variant, we MUST generate a complete subtype declaration?

To make matters worse, the semantic analysis of these, quote, "NECESSARY" subtype declarations causes the entity list for all the fields of the original record type to be duplicated. Since a single entity takes up 4 times the room of a normal node, a record with, say, 32 fields in it will require 128 nodes worth of memory to hold this field list in the, quote, "expanded" subtype declaration. Now go back to when this happens: at EVERY SELECTED COMPONENT! Potentially a VAST number of locations, even in a small piece of code. In my simple test case there was a FIVE times increase in the number of nodes created by the compiler, and I haven't even said anything about how big the generated object code is.

The problem here is that a unit of about 3600 lines compiled fine on the 3.01 compiler with about 15 Mb of memory in use. On the 3.03 compiler the same piece of code REQUIRES 75 Mb of memory. On my little 486/66 with 20 Mb of physical RAM, once the compiler allocates more than about 50 Mb of memory the machine begins to thrash, and the job runs and runs and runs. In fact, on the 3.01 compiler the unit compiles in about 2-3 minutes. With the 3.03 compiler, it takes over 36 HOURS! Clearly this is NOT acceptable. Strangely enough, when I simply stop generating those subtype declarations, everything works fine and it takes 2-3 minutes again. I find it hard to swallow an "ENHANCEMENT" of adding these subtype declarations which forces me to go BUY memory to max out my older machine at 32 Mb. But then that will most likely NOT be enough memory, so I will be forced to go out and BUY a new Pentium, or at least a newer motherboard, and even MORE memory.

The thing I find most ridiculous about this problem is how the current 3.03 design produces literally thousands of DUPLICATE subtype declarations. Consider a simple situation where we are using a case statement to comb through one of several variants of a record. The case expression would be a selected component that names the discriminant of the record. This selected component generates a new subtype declaration. After that, in the arms of the case statement, one of the most inevitable things to do is to use more selected components that work with the fields in the variants of the record. However, each of these selected components ALSO generates a subtype declaration, and each of these is an EXACT DUPLICATE of the one generated for the case expression! This is not only braindead, it is wasteful and very sloppy coding!
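Here is the kind of user-level source I am talking about. The types are hypothetical, purely for illustration; the point is that, under the 3.03 scheme described above, the case expression and every field reference in the arms would each drag in its own (identical) actual subtype declaration:

   --  Hypothetical example types; illustration of the pattern only.
   with Ada.Text_IO;

   procedure Variant_Demo is

      type Shape_Kind is (Circle, Rectangle);

      type Shape (Kind : Shape_Kind := Circle) is record
         case Kind is
            when Circle =>
               Radius : Float;
            when Rectangle =>
               Width  : Float;
               Height : Float;
         end case;
      end record;

      S    : Shape := (Kind => Circle, Radius => 2.0);
      Area : Float;

   begin
      case S.Kind is                                  --  selected component #1
         when Circle =>
            Area := 3.14159 * S.Radius * S.Radius;    --  #2 and #3
         when Rectangle =>
            Area := S.Width * S.Height;               --  #4 and #5
      end case;
      Ada.Text_IO.Put_Line (Float'Image (Area));
   end Variant_Demo;

Five selected components into the same object, and by the description above, five subtype declarations where one (or none) would do.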
> It is certainly true that there were some regressions with 3.03. We know
> of no way to guarantee that there will be no regressions, but we had
> very few reported, and for most people, the large number of bug fixes
> and extra functionality from 3.01 to 3.03 was desirable!

This brings up problem 3: the derivation of tagged types. Take a look at the package sem_ch3.adb in the procedure Build_Derived_Tagged_Type. Pay particular attention to the "Present" check on line 7541. The original code set the discriminant constraint on the derived type equal to the one on the parent type, but ONLY if the parent type HAD a discriminant constraint. In cases where the parent did NOT have a discriminant constraint, the derived type's discriminant constraint was not initialized (i.e., Elist6 remains at its default value of 0, not a value of 100,000,000). Later, when this field gets referenced as an entity list, the compiler croaks with a Constraint_Error.

As far as regressions go, try the package Ada.Finalization. It has a derived type in its private part named "Controlled". It is derived from the Root_Controlled type of System.Finalization_Root, which in turn is derived from the abstract tagged type Empty_Root_Controlled. With this bug, since the original type Empty_Root_Controlled has no discriminants, its Discriminant_Constraint field (Elist6) gets a value of No_Elist. However, when it is derived into Root_Controlled, the Discriminant_Constraint field is NOT initialized and so keeps the illegal value of 0. Then, when Ada.Finalization.Controlled is derived from Root_Controlled, this SAME check for "Present" will raise a Constraint_Error.

So we have ANOTHER case where, at a minimum, ACT tested 3.03 with the executables for 3.03 compiled with at least constraint checking turned off. Since you must explicitly tell the compiler not to generate the checks, ACT must have made a conscious decision to turn off checking when they compiled the code they tested. If the excuse for doing this is "so we could test the exact executable we deliver", that begs the question: "Still, why is it that ACT didn't even compile the compiler with checks on at least ONCE?"

The first problem, with the instantiation of the Table package, will stop the building of the compiler when you attempt to build the stage 2 version of the compiler. Once you fix that problem, you can complete the building of the stage 2 compiler, but now you run into a problem with the stage 2 compiler not being able to compile GNAT's own libraries. There could not be a more obvious piece of code to use for regression testing than the source for GNAT and its own library. If just ONCE somebody had compiled the compiler with checking turned on, these 2 problems would have been found before the 3.03 source went out.
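For anybody who wants to see the shape of that regression without digging through the library sources, the derivation chain involved looks roughly like this. These are stand-in declarations of my own, not the actual text of System.Finalization_Root or Ada.Finalization:

   --  Sketch only: stand-ins for the derivation chain described above.
   package Finalization_Chain_Sketch is

      --  Stands in for Empty_Root_Controlled: abstract, tagged, and with
      --  NO discriminants, so its Discriminant_Constraint is No_Elist.
      type Empty_Root_Controlled is abstract tagged null record;

      --  Stands in for System.Finalization_Root.Root_Controlled.  Per the
      --  bug above, its Discriminant_Constraint is left at the junk value
      --  0, because the parent had no discriminant constraint to copy.
      type Root_Controlled is new Empty_Root_Controlled with null record;

      --  Stands in for Ada.Finalization.Controlled.  Deriving from
      --  Root_Controlled hits the same "Present" test on the junk value,
      --  and the compiler dies with Constraint_Error -- IF it was built
      --  with checks on.
      type Controlled is new Root_Controlled with null record;

   end Finalization_Chain_Sketch;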
> I think you would do better to report the bugs you find in a calm and
> collected manner, rather than flail around trying to fix them yourself
> when you don't fully understand the semantic implications of what you
> are doing. The "ideots" [sic] at ACT might just possibly understand
> some of the issues better than you do!

I am NOT making bug reports! I am making some general observations known about things I have found in the GNAT source code. I'm sorry you don't like my politically incorrect presentation. Would "Real World Ada Experience Challenged" be better?

Speaking of which, take a look at the Init routine in the Table package. The code says:

   Max    := Min + Table_Initial - 1;
   Length := Max - Min + 1;

Why not:

   Max    := Min + Table_Initial - 1;
   Length := Table_Initial;

Are you just EXTREMELY confident in the GCC code generators, or was this a typical mistake made by a coder new to Ada? Look at the related first bug. It was caused by a similar newbie misunderstanding: that the 'Last of a null slice can SOMETIMES not be in the range of the index subtype. Dealing with 'First, 'Last, and 'Length of arrays, and understanding the relationships between them, is Ada 101 stuff. I expect these kinds of problems to typically be found in code produced by coders with less than 3 years of real-world Ada programming experience. I would HOPE I could expect some higher level of experience in a compiler maintainer/vendor!
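Since we are on Ada 101, here is the whole point in a dozen lines. This is a trivial stand-alone example of my own, not GNAT code: for a null array, 'Last is one below 'First, and that value does not have to be a member of the index subtype at all, which is exactly the trap the Table package fell into:

   --  Trivial illustration, not GNAT code.
   with Ada.Text_IO; use Ada.Text_IO;

   procedure Null_Array_Demo is
      type Natural_Array is array (Natural range <>) of Integer;
      Empty : constant Natural_Array (0 .. -1) := (others => 0);  --  null array
   begin
      Put_Line ("First  =" & Integer'Image (Empty'First));   --   0
      Put_Line ("Last   =" & Integer'Image (Empty'Last));    --  -1, NOT a Natural
      Put_Line ("Length =" & Integer'Image (Empty'Length));  --   0

      --  Forcing 'Last back into the index subtype, which is effectively
      --  what Table.Last does for an empty table, blows up when checks
      --  are on:
      Put_Line (Natural'Image (Natural (Empty'Last)));
   exception
      when Constraint_Error =>
         Put_Line ("Constraint_Error: 'Last of a null array is outside Natural");
   end Null_Array_Demo;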
> Regarding diffs, we could certainly do this. They are large, probably
> 3.02 to 3.03 would be 10,000 lines of diffs, many of them coming simply
> from reformatting, comment changes etc. as well as technical changes.

This would be GREAT! At least we could then externally track what ACT is doing to the compiler. It would also be helpful if you would release versions of the source code EVEN if you don't provide executables. I never saw any 3.02 executables or source, but you talk like it existed! Go out to MIT's GNU FTP site. Take a look at how they handle GCC. Why not the same thing for the source of GNAT? Two or three versions, with diffs in between. WHO cares if you provide executables. From what I have seen, I will need to build the compiler myself ANYWAY, just so I can know EXACTLY what I have in front of me.

It sure wouldn't hurt if ACT would provide some REAL documentation on the compiler, like:

1. A general overview of the workings of the compiler. (GCC has great documentation included with the source.)

2. Documentation of what to use and how to regenerate atree, sinfo, einfo, etc. (GCC at least includes info on using bison to produce compiler code.)

3. When your README says "rename this file to", at LEAST have the file. (GCC is rather complete.)

> You actually seem to understand a lot about the technology of GNAT. Too
> bad you can't operate in a more constructive mode, like many of the
> other volunteers who help improve the technology. In particular, it
> really would be helpful to have detailed bug reports, rather than
> vague complaints. If you think you have a fix, by all means send it
> along to us. Many people send us proposed fixes. Often they are not
> quite right, but they are very helpful in pointing to the right fix.
> Sometimes they are indeed right. The difficulty is that many things
> interact subtly. Especially if you are not thoroughly familiar with
> GNAT, and with Ada 95 semantics, then what seems like a simple fix
> can in fact turn out to have subtle interactions, as is the case with
> the actual subtype stuff.

Once again, I am not trying to help you fix your compiler. If I were doing that, you would have seen this as a bug report through normal channels. This is being posted on Usenet to highlight some problems I was having with code from ACT. Since GNAT was always intended to be the core piece of a project to help spur on the growth of Ada, I felt that such basic problems with the GNAT code should be brought to the public's attention. Every person involved in Ada work should be concerned.

Ada has suffered because of compiler vendors that have jacked the price of a decent Ada compiler up to the level where only government institutions are really able to afford it. To make matters worse, they only added features to the compiler when the government was willing to pay for them. As a result, very few organizations outside the government have adopted Ada. Kind of reminds me of Beta versus VHS. Beta was clearly a better system, but the additional cost of Beta wiped it out. Bought any Beta tapes lately? :-) Or should I start working with C++ (yuck, curse, vomit, bite my tongue)? :-)

> By the way, ACT always provides full source releases with binary releases.
> We do not provide FTP access direct to ACT for other than our customers,
> but the releases are on many public FTP sites, and they are certainly
> free to keep old sources around. It certainly seems like it would be a good
> idea if one of these sites would keep old sources around. Disk space is
> not infinite at NYU, so this is probably not the best choice, but it is
> hard to believe that some of the other sites that keep copies of GNAT
> around do not have the disk space to keep old copies of the sources.

Back to the appropriateness of selling a validated compiler while maintaining a piece of freeware. An FTP site for "customers only"? Sounds like you have things there that didn't make it to the public sites, like maybe the 3.02 source and the documentation. Since you haven't updated your home page since 2/9/96, and it's all screwed up anyway, I don't find it strange that you don't seem to understand how your code is mirrored around the world. When you delete a file from cs.nyu.edu, it gets deleted on EVERY OTHER MACHINE. That's why they call them mirrors! If ACT's definition of how to distribute GNAT is to HOPE that, because of a lack of space, somebody out there keeps older copies, then I am here to say: "When it comes to disk space and old copies of the GNAT source, the Emperor HAS No Clothes!"

Joe Schafer
jschafer@iquest.net