From: donh@syd.csa.com.au (Don Harrison)
Subject: Re: Safety-critical development in Ada and Eiffel
Date: 1997/07/25
References: <33D6A2EE.325@flash.net>
Organization: CSC Australia, Sydney
Reply-To: donh@syd.csa.com.au
Newsgroups: comp.object,comp.software-eng,comp.lang.ada,comp.lang.eiffel

Ken Garlington wrote:

:Don Harrison wrote:
:>
:> Ken Garlington wrote:
:>
:> :Don Harrison wrote:
:> :>
:> :> First, part of the reason it looks complicated is that it is only
:> :> emulating (in Ada83) the real thing. As I've already said, I wouldn't
:> :> implement assertions this way if I had the proper tools to work with.
:> :> If you express it in Eiffel, it becomes simpler.
:> :
:> :OK - could you post the Ada equivalent?
:>
:> Sure. Assuming the availability of a postcondition clause,
:>
:>   type Input_State_Type is (.., Menu, ..);
:>
:>   Input_State : Input_State_Type;
:>   No_of_Keys : Natural;
:>   Menu_Option_Selected : Boolean;
:>   Submenu_Option_Selected : Boolean;
:>
:>   ...
:>
:>   function in_menu_state return Boolean is
:>   begin
:>     return (Input_State = Menu and then No_of_Keys = 0 and then
:>             not Menu_Option_Selected and then not Submenu_Option_Selected);
:>   end;
:>
:>   procedure Interpret_CLR_Key is
:>   begin
:>     ...
:>   ensure in_menu_state   -- General assertion
:>   end;
:>
:> Note that there is no precondition on Interpret_CLR_Key, in accordance
:> with the fact that the Console Operator should be able to press any key
:> at any time without having the software fall over. Using assertions to
:> validate input would be a misuse of DBC.
:
:Compare this to the Ada code you presented earlier. Why is this
:significantly simpler or easier to read?

Due to abstraction.

:> :> Second, the compound condition represents a single higher level
:> :> condition which can be declared as a function.
:> :
:> :But doesn't that just postpone the complexity? Making the assertion
:> :nearest the point the reviewer is reading less clear, and requiring a
:> :search through other files to discover the meaning of the higher-level
:> :condition?
:>
:> Yes, but this is no different from using encapsulation for abstraction
:> anywhere else to simplify. You aren't going to replace every call in a
:> procedure with the actual code of the called procedures (a compiler may,
:> but a human shouldn't). Neither should you do this for assertions. You
:> don't suddenly forget about abstraction just because you're writing an
:> assertion.
:
:True, but remember that we are asking a human reader to not just
:understand what is required from a user's point of view (where
:abstraction permits unnecessary details to be hidden), but to verify if
:the software will function correctly or not. Doesn't this imply that the
:evaluator will likely have to understand more than that required by a
:user of the interface?

It means they have to find out more but the effort involved isn't
excessive, IMO.
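To make the abstraction point concrete, here's roughly how the same
contract might read written directly in Eiffel. This is only a sketch -
the class name, the integer state representation and the feature names
are invented for illustration, not lifted from the real system:

   class KEY_HANDLER

   feature -- Status report

      in_menu_state: BOOLEAN is
            -- Is the console idle at the top-level menu?
         do
            Result := input_state = Menu_state and no_of_keys = 0 and
               not menu_option_selected and not submenu_option_selected
         end

   feature -- Basic operations

      interpret_clr_key is
            -- Process the CLR key.
         do
            -- ... key processing elided ...
         ensure
            back_at_menu: in_menu_state   -- General assertion
         end

   feature {NONE} -- Implementation

      input_state: INTEGER
      no_of_keys: INTEGER
      menu_option_selected, submenu_option_selected: BOOLEAN

      Menu_state: INTEGER is 1
            -- One of the input state codes (invented here)

   end -- class KEY_HANDLER

A reviewer reads "ensure back_at_menu: in_menu_state" and only looks
inside the query if they actually need the details.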
:> A prerequisite for (any kind of) abstraction to be effective, of course,
:> is meaningful identifiers. If your identifier names accurately reflect
:> what you're trying to model, a high-level assertion immediately makes
:> sense and you only look further if you really need to know the details.
:> This would be the case, for example, if investigating the cause of an
:> assertion violation or conducting a manual code review.
:
:Exactly my point. Meyer claims that such manual reviews are the primary
:benefit of assertions. However, abstraction does not necessarily assist
:such a review; in fact, it may complicate it.

Well, it does add a little more work but I've never heard anyone complain
about it.

:> Otherwise, the higher-level abstraction is not only sufficient but
:> desirable (unless you want to be needlessly swamped by complexity).
:>
:> Using meaningful identifiers sure makes a big difference to
:> intelligibility and, IMO, is a skill that can be acquired by any
:> reasonably literate developer who is suitably encouraged (read "forced")
:> to use them.
:
:However, if the intent is to discover errors made by the developer, to
:what extent can the reviewer trust those "meaningful identifiers"?

If you know that Joe Bloggs, the author, is fond of using non-indicative
identifiers, you know to check the details. Of course, *real* software
engineers aren't like that, are they?..

Realistically, any software shop that *does* use Eiffel is likely to
encourage/enforce sensible naming so I don't see it as an Earth-shattering
issue.

:> :For that matter, if this is the correct method, why not do it in Ada?
:>
:> Purely historical reasons. If the code wasn't frozen at the moment, I
:> probably would.
:
:Then this doesn't seem to be an advantage of Eiffel over Ada, as your
:earlier comment indicated.

Perhaps we've miscommunicated here. The principle of abstracting
assertions using functions is language-neutral - unlike a number of other
assertion-related issues.

:> :> I don't know what ADARTS is but it's one thing to be told you have to
:> :> think of all assumptions and another to demonstrate you have, as
:> :> evidenced by coded assertions.
:> :
:> :Again, from a documentation standpoint, why is writing code better than
:> :documenting the assertions in a form the reviewer understands?
:>
:> Consider the aversion of developers to documenting code. It's viewed by
:> them as a necessary burden which doesn't actually make the software work
:> so there's little motivation to do it. Executable assertions, OTOH,
:> actually contribute something (runtime checks) to a running program.
:> Therefore, they have some material value in the mind of a developer and
:> they're more likely to use them.
:
:I'm not sure I agree that well-trained and professional developers feel
:this way, but let's say they do.

IME, the desire to minimise the amount of work we have to do is shared by
everyone, however well-trained and professional they happen to be.
However,..

:Wouldn't this imply that they will be biased toward adding code that
:helps them get their code correct for the current use of their code
:(i.e. that part that provides direct value to them) and much less
:attention to addressing issues that don't provide value to them (e.g.,
:handling errors due to improper reuse by "some other guy")?

Not if they take a longer-term, reuse-centred view. That sort of attitude
would be encouraged in Eiffel shops. Those who weren't interested in
practising it would more likely be working somewhere else.
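Coming back to the point about executable assertions contributing runtime
checks: the same documented assumptions can double as a class invariant,
so they are checked whenever assertion monitoring is enabled. Again just a
sketch, reusing the invented names from the Eiffel fragment earlier in
this post:

   invariant

      key_count_non_negative: no_of_keys >= 0
      one_selection_at_a_time: not (menu_option_selected and
         submenu_option_selected)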
:> The by-product is that the code gets documented for free and the
:> developers actually *want* to do it.
:
:It also seems that the documentation value of the assertions will be
:lessened (less likely to use meaningful identifiers vs. ones that "work
:good enough"). The line between coders that don't like to write
:documentation vs. coders that don't like to write self-documenting code
:for others to read appears to me to be easy to cross...

Yes, it is easy to cross, especially after a few nightmare experiences as
a maintenance programmer. :)

:> :> :I also challenge whether or not you code all of the assertions you
:> :> :can envision. Look at your example from earlier. There's absolutely
:> :> :no other assertions you could consider?
:> :>
:> :> There may be others but so what? The existence of any I may not have
:> :> thought of doesn't negate the effectiveness of those I *have* thought
:> :> of.
:> :
:> :But how do you draw the line between the assumptions you state and the
:> :assumptions you leave silent?
:>
:> Sorry, not sure what you're driving at.
:
:Every error condition that might occur (e.g., adding extra code that
:makes an extraneous assignment), but is not checked by an assertion,
:would appear to be an undocumented assumption. Every undocumented
:assumption, per Meyer, represents an incomplete description of the
:interface.

True.

:How do you decide which assumptions not to document?

I can only speak for myself, but I document as many as I can think of
without going overboard (bearing in mind I'm developing realtime
software). Because of my domain, I'm probably more selective than a
non-realtime programmer would be.

:> :And what does that do to the oft-said statement that Eiffel programmers
:> :put in assertions for unlikely events?
:>
:> That's what all (or at least most) assertions are. They're catering for
:> the unlikely event that the developer has made a logic error. Most of
:> the time we write code with correct logic but occasionally we make
:> mistakes. These mistakes can often be picked up by an assertion.
:
:So, how are the likely events detected?

By this definition, the likely events correspond to correct logic so we're
not interested in them.

:And, if the same person who made the logic error writes the assertions,
:why is it reasonable to expect that the assertion detects the error?
:Won't the person simply make the same mistake in logic, particularly for
:such things as missed requirements?

Perhaps sometimes, but often not. Usually, the error in logic is in a
different piece of code to the assertion. (Consider my ATC example.) Even
if the same person writes both the assertion and the code that uses it,
there may be an intervening time period, giving the developer a chance to
forget their original reasoning. Worse still, in a team-oriented setup, it
usually *isn't* the same person and their understanding of how the system
hangs together may differ. In my own experience, a large number of such
differences can be exposed by assertions.
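To illustrate the point about the error and the assertion living in
different places, here's a trivial made-up example (the names are
invented; it isn't from any real system):

   class BOUNDED_QUEUE

   feature

      put (x: INTEGER) is
            -- Add `x' to the queue.
         require
            not_full: count < capacity
         do
            -- ... store x ...
         ensure
            one_more: count = old count + 1
         end

      count: INTEGER
      capacity: INTEGER

   end -- class BOUNDED_QUEUE

If a client's logic wrongly concludes there must still be room and calls
put anyway, the not_full violation is reported at that call. The assertion
lives in the supplier; the mistake lives in the client.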
:> :> :How about the possibility that calling one of the functions in the
:> :> :assertion causes NO_OF_KEYS to become non-zero, just for fun?
:> :>
:> :> As I don't write functions with side effects, I'm not sure why I
:> :> would want to do that!..
:> :
:> :More correctly stated, you don't _intend_ to write assumptions with
:> :side effects, so you _assume_ there's no need to write an assertion to
:> :confirm that you, in fact, met your intent.
:>
:> You mean meta-assertions? Obviously, your question is tongue-in-cheek
:> but it does highlight the fact that there is a practical limit beyond
:> which you don't want to go, and meta-assertions are well beyond that
:> limit. Most people want to go home at the end of the day.
:
:I don't know why this is a "meta-assertion." Extraneous code is a
:category of error, just like missing code.

You were talking about assertions with side-effects and seemed to be
suggesting a need for assertions about assertions - which would be
meta-assertions. These would be overkill, IMO.

Getting back to your question, it can be verified statically (using a
tool, for example) that an assertion (transitively) performs no
assignments. So, a language environment with such a tool can verify for
you that all your assertions are free of side-effects.

:Given that "most people want to go home at the end of the day," my point
:stands.

Best have a tool do it for you, then, instead of doing it manually.

:Which assumptions do you document?

Those you think of.

:Which do you leave undocumented?

Those you don't (or choose to ignore for whatever reason).

:And what theory drives the decision? Does DBC say, "this type of coding
:error is more likely than the other, so protect against it first?"

In theory, you try to document as many as you can think of without
discrimination. However, I guess you can be more selective in practice.
Personally, I try to anticipate the most likely causes of error and
document those possibilities in assertions. Maybe others would like to
offer their perspective..

:> :How do you catch software design faults in your code if you assume you
:> :made no mistakes in your code?
:>
:> Don't follow. In writing assertions, you assume you *will* make design
:> mistakes in the code proper.
:
:Except that you deliberately exclude classes of faults, apparently,
:without any statement in the code that you did so.

Not intentionally, unless it's expedient for some reason - e.g. for
timing.

:> If you're talking about design faults in assertions, I've already
:> answered you previously - the code proper acts as a built-in assertion
:> validation mechanism.
:
:Think again, particularly with respect to missing code.

Maybe we're misconnecting here. I'm saying if you *do* code an assertion
and you happen to miscode it, or your initial assumption was incorrect,
you will find out because you'll get an assertion violation under
legitimate, rather than erroneous, circumstances. You then *know* the
assertion is incorrect, you find out why, and fix it. In this way, the
code proper acts as an assertion validation mechanism.

Obviously, if you *don't* code an assertion (for whatever reason - didn't
recognise the assumption, decided to leave it out), the code proper isn't
going to validate it. Not sure whether I've covered your original
question, though..

:> :> There already was one, but it would be better to also have a higher
:> :> level one for initialisation. This would aid testing in that if you
:> :> fed in the wrong profile, you could verify that the IRS will not
:> :> work. It wouldn't rely on the more obscure low-level assertion that
:> :> was actually violated.
:> :>
:> :> In IRS:
:> :>
:> :>   set_operational_profile (profile: OPERATIONAL_PROFILE) is
:> :>     require profile_compatible (profile)
:> :>     do ..
:> :>     end
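Expanding that fragment slightly before going on - again only a sketch,
with invented attribute and feature names; the real profile class would
capture whatever the specification describes:

   class OPERATIONAL_PROFILE

   feature -- Status report

      is_valid: BOOLEAN is
            -- Does the profile satisfy its own consistency rules?
         do
            -- ... check the captured limits, modes, etc. ...
         end

   feature -- Access

      max_horizontal_acceleration: REAL
      -- ... other captured characteristics ...

   end -- class OPERATIONAL_PROFILE

and, back in the IRS class,

      profile_compatible (profile: OPERATIONAL_PROFILE): BOOLEAN is
            -- Can this IRS operate under `profile'?
         require
            profile_exists: profile /= Void
         do
            Result := profile.is_valid
               -- ... and whatever IRS-specific checks apply ...
         end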
:> :
:> :This would require that the profile be stored in memory, wouldn't it?
:> :The profile is a textual/graphical description of the behavior of the
:> :environment.
:>
:> No, it simply requires that the thing you want to make assertions about
:> can be abstracted in some way. This implies both attributes (data) and
:> operations on those attributes. If you can abstract a profile using a
:> class (which I'm sure you could), then you can make assertions about its
:> compatibility.
:
:Don't assertions involving the environment require data about the
:environment?

Ultimately, in some way. In terms of textual/graphical descriptions, if
you can capture that information in a bunch of abstractions and capture a
set of validity rules about it, you should then be able to code an
assertion which expresses its validity. This sort of assertion is quite
high-level and potentially complex so would be a good candidate for an
abstract assertion.

:> :You might be able to express it as "The horizontal acceleration shall
:> :not exceed xxx..." ... and in fact, it probably would make more sense
:> :to express it exactly that way (in English!) so that it is clear at
:> :the requirements level.
:>
:> You can do both.
:
:I'm not sure I agree with the claim that all comments can be written as
:code!

Nor would I but this one sounds like it could be. :)

:> The other examples actually aren't applicable to the efficacy of DBC as
:> they stem from errors in specification documents..
:
:WUPS!

Uh-oh. :) Why do I get the feeling this is leading somewhere?..

:DBC doesn't protect against errors in specification documents? An
:interesting statement!

Not unless there was a related specification which serves indirectly to
validate the erroneous one. This would be effectively the same as if the
developer made an error somewhere and an assertion picked it up. So,
leaving it out doesn't necessarily mean all bets are off on picking it up.
Otherwise, it's true. Alas, assertions won't do *everything* for you. :(

:Given this, then it shouldn't be able to protect against
:misinterpretations of specification documents by a developer, either,
:true?

Again, not unless there was a related specification which serves
indirectly to validate the one misinterpreted - otherwise true.

:Or, for that matter, misinterpretations of any higher-tier document
:(e.g. a design document)?

Same deal.

:Doesn't a requirement not read equate to a requirement not written?

In practical terms, yes. (For some reason, this has a familiar ring to
it :)

Watch out, here comes the punchline..

:Finally, of all of the errors in code, assuming the use of existing
:techniques (code reviews, unit test, etc.) how many are "code-only"
:errors vs. misinterpretations of higher-tier documents?

Probably greater than 50%. Why, I wonder, do you ask?.. :)


Don.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Don Harrison
donh@syd.csa.com.au