From: "Robert I. Eachus"
Subject: Re: Required Metrics
Date: 2000/05/07
Organization: The MITRE Corporation
Newsgroups: comp.lang.ada

Robert Dewar wrote:

> Well, "shall" is an impressive word, it sure *sounds* like
> a requirement. Unfortunately, what on earth does it mean to
> "provide the following information". Ooops, totally undefined.
> And since the information for ALL sections is equivalent to
> information provided in the object file, you can argue from
> a formal point of view that the object file satisfies the
> requirement.

Let me see if I can spread a little oil on the water here. Ken sees the shalls, and is used to saying "Aha, a requirement. I must find all such requirements--explicit, implicit, and derived--during requirements analysis, allocate them, and plan to test them." I've spent time in that environment as well, and so had many of the people who developed and approved the Ada standard. It somehow seems wrong to use shall in a non-testable 'requirement.' The requirement should either be made testable, or not treated as a requirement.

On the other hand, everyone involved in the process was aware of what had happened in Ada 83 with most of Chapter 13. Some vendors ignored the parts that weren't tested, and other vendors, since the requirements there were not tested, put implementation of those features at low priority. (If a customer came along with a need and sufficient cash, those features moved up the list. But the users were not particularly happy with that either.)

So why were there no meaningful Chapter 13 tests? Not because there were no real requirements there, but because the real requirements were not testable, and the parts that were formally testable were such that tests were counterproductive. For example, how do you design a portable test for the bit patterns used to represent enumeration types? It is much more difficult than it seems, since S'Pos returns the position number, not the representation. Of course, a new attribute could be required that would make the enumeration representation clause testable, but that would add a significant burden. (Note that RM 95 says in 13.4(11) that "Unchecked_Conversion may be used to query the internal codes used for an enumeration type." But this is a note, and intentionally does not say how to do it or place any requirements on the results of such conversions, so this is not even a derived requirement. You can, from the real requirements, derive some tests that are useful, but not necessarily as useful as you would expect, due to byte-sex (endianness) and bit-ordering issues; see 13.5.3.)
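To make the 13.4(11) note concrete, here is a minimal sketch of the kind of query it has in mind. The type, its representation clause, and the choice of an 8-bit target for the conversion are all assumptions for illustration; nothing in the standard requires the internal codes to fit in 8 bits, which is exactly why no portable test can be built on this.

   with Ada.Text_IO;
   with Ada.Unchecked_Conversion;
   procedure Enum_Rep_Query is
      type Color is (Red, Green, Blue);
      for Color use (Red => 1, Green => 2, Blue => 4);
      for Color'Size use 8;

      type Byte is mod 2 ** 8;

      --  S'Pos yields the position number (0, 1, 2); only an
      --  unchecked conversion exposes the internal codes (1, 2, 4).
      function To_Byte is new Ada.Unchecked_Conversion (Color, Byte);
   begin
      for C in Color loop
         Ada.Text_IO.Put_Line
           (Color'Image (C)
            & " position:" & Integer'Image (Color'Pos (C))
            & " code:" & Byte'Image (To_Byte (C)));
      end loop;
   end Enum_Rep_Query;

On a typical compiler this prints codes 1, 2, and 4, but since the RM places no requirement on the result of the conversion, such a program documents behavior rather than tests it.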
For Ada 95, there was a strong desire to make Chapter 13 more useful. But since the nature of Chapter 13 is to be very implementation dependent, and difficult to test, users, especially embedded-systems users, wanted a stronger "moral" commitment to features that they felt were necessary, even if they were untestable. Such moral commitments became Implementation Advice and Documentation Requirements.

But note that for most Documentation Requirements that require a measurement, the reason for adding the requirement was not to ensure that these measurements appear in some manual that may bear little or no relation to the intended hardware configuration. The idea was that for those particular quantities, it was important that it be possible to determine what they are. A computer program that performs the measurements is more than adequate documentation, as long as the underlying moral commitment is met. Since for most of the numeric documentation requirements those programs exist, the requirement is trivially met IF THE QUANTITY IS MEASURABLE. So in most (all?) cases, the existence of an ACES report meets the documentation requirements, as long as the compiler does not engage in bizarre behavior. (For example, if rendezvous take an extra three milliseconds on alternate Tuesdays, that needs to be in the documentation.)

One last word on testable requirements. Is a requirement that an error will be raised if the sun rises in the West testable? Not really. The requirement is clear, but it is not possible to create the environment necessary to perform the test. What about a requirement to document the number of planets in the solar system? Somewhat testable, but what do you do if the documentation says 10? Is the documentation wrong, or is there a planet you don't know about?

This is why many of the documentation requirements are untestable. For example, the intent of D.8(10) is clear: "An upper bound on the execution time, in processor clock cycles, of a delay_relative_statement whose requested value of the delay expression is less than or equal to zero." If the documentation says the upper limit is one million clocks, is it useful? Probably not. Testable? Not likely. It is possible that a test program would require more than a million clock cycles for some input, and that by luck your testing finds that case. But far more likely is that you have a meaningless test that can only fail to falsify the documentation. If the documentation specified 50 clocks, the test would be much more interesting, but from a formal point of view nothing has changed.

> Now of course from a pragmatic point of view, we want the
> information in a much more usable form, and GNAT provides
> a lot of *non-required* capabilities in this area. For
> example...

And this is where Robert Dewar the implementor can and will give much more acceptable answers than Robert Dewar the language lawyer. Real users don't want useless volumes of meaningless measurements; they want answers to the real questions that lie behind the Documentation Requirements. There shouldn't be--and there is not--a requirement that the documentation must be in 10 point sans-serif font on 8 1/2" by 11" paper with one inch margins. The real requirement is that the information should be available to the user when needed, and in a form that applies to the target system. So the real checkoff for any of the documentation requirements is not "where is the piece of paper?" It is whether the user can obtain the information when needed.
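For concreteness, here is the shape such a measurement program might take for the D.8(10) example above. It is a sketch, not a conformance test: it reports a mean cost in seconds using Ada.Real_Time rather than the upper bound in processor clock cycles that D.8(10) actually asks for, and as argued above, a measured mean can at best fail to falsify a documented bound.

   with Ada.Text_IO;
   with Ada.Real_Time;
   procedure Delay_Zero_Cost is
      use Ada.Real_Time;
      Trials : constant := 100_000;
      Start, Stop : Time;
   begin
      Start := Clock;
      for I in 1 .. Trials loop
         delay 0.0;   --  requested delay <= 0, the case D.8(10) covers
      end loop;
      Stop := Clock;
      Ada.Text_IO.Put_Line
        ("Mean cost of ""delay 0.0"":"
         & Duration'Image (To_Duration (Stop - Start) / Trials)
         & " seconds");
   end Delay_Zero_Cost;

The clock resolution, the loop overhead, and the conversion to seconds all make this an observation about the implementation, not a check of it, which is the point being made here.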
If ACT chooses to provide this for their validated compilers in the form of good customer support, great. If some other vendor chooses to provide a program which, when compiled and run, prints out the documentation, that's good too. And maybe some vendor chooses to provide a set of formulas which can be used to get the numbers. That is also acceptable, even if it is not very convenient.

So there is no conspiracy here by compiler vendors or language lawyers to dilute or subvert the meaning of the Reference Manual. The ARG has enough work to do dealing with other issues. If the issue can be resolved by writing a page of code, then believe me, that is much easier than creating an AI (Ada Issue).

The compromise, if it can be called that, is found in RM 1.1.3(19): "The implementation may choose to document implementation-defined behavior either by documenting what happens in general, or by providing some mechanism for the user to determine what happens in a particular case." ANY documentation requirement can theoretically be satisfied by providing a program which does the measurement. Of course, not all documentation requirements can actually be satisfied that way. It is left up to the implementor to choose which Annex M issues have to be dealt with in the documentation and which are better satisfied using a tool or program. It is possible that for any particular documentation requirement some compilers will not be able to provide meaningful documentation, while others cannot provide a test program. And that is another reason why this is a can of worms from a testing standpoint. The determination of which Annex M issues can be tested is something that cannot be done without looking at a particular compiler in depth, and the answers will only be appropriate to that compiler.

Again, from a user's point of view, all this nit-picking is irrelevant. A user who needs, say, a compiler which never takes more than 1 millisecond for a clock interrupt is not going to be satisfied with some random document. He or she is going to write those requirements into the purchasing contract. Any such requirements in the contract will specify not that this or that feature be documented, but the actual required performance.
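As a closing footnote to the 1.1.3(19) point: a "mechanism for the user to determine what happens" can be as simple as a program that prints the quantities in question. The quantities below are merely examples chosen to show the shape of such a program; whether printing them discharges any particular Annex M item depends on the item and on the implementation.

   with Ada.Text_IO; use Ada.Text_IO;
   with System;
   procedure Print_Impl_Defined is
   begin
      --  Each line reports an implementation-defined quantity.
      Put_Line ("Storage_Unit (bits):" & Integer'Image (System.Storage_Unit));
      Put_Line ("Word_Size (bits):   " & Integer'Image (System.Word_Size));
      Put_Line ("System.Tick:        " & Duration'Image (System.Tick));
      Put_Line ("Duration'Small:     " & Duration'Image (Duration'Small));
   end Print_Impl_Defined;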