comp.lang.ada
* Use of DBC as "executable SRS": scaling problems
@ 1997-07-29  0:00 card
  1997-07-30  0:00 ` Nick Leaton
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: card @ 1997-07-29  0:00 UTC (permalink / raw)
  To: card


During the Ariane thread, some stated or implied that Eiffel's inclusion
of preconditions and postconditions in method signatures and invariants
in class specifications would have made the software more verifiable
since the software requirements would have effectively been coded right
into the class definitions.

I do not think that using DBC as an "executable SRS" (SRS == Software
Requirements Specification) would have necessarily saved the Ariane V
(I am not going to revisit the issues of testing & verification, etc).
The reasons I think this way are not due to any inherent flaws in the
DBC concept itself, but rather are due to problems of scale:

(1) In a large and complex system, the number of preconditions and post-
    conditions in a complex class hierarchy could get extremely large.
    I was a member of a team which built a real-time database manager
    for the Seawolf submarine. This software component had over 300
    requirements, and we had to verify almost all of them by test (there
    were a *very small* number that were verified by inspection). These
    tests included timing tests that were conducted on the actual
    hardware that was delivered to the Seawolf.

    In a large system like this, the verification and validation (V & V)
    process is difficult. It is hard to keep track of 300+ requirements,
    especially since those requirements frequently change during the
    development of the system! (It took a team of 3+ people about 2.5
    years to build and verify the Seawolf's database manager). If the
    requirements are expressed in code that is part of your method
    signatures, the task will not (IMHO) get any easier. In fact, it
    would likely get worse, since there would be additional assertions
    on low-level "utility" classes that are not part of the system's
    requirements. These would also have to be verified against another
    set of requirements.

    In the end, you would have a *very large* number of assertions to keep
    track of and update as the system's requirements changed. Making sure
    that the changes in your requirements baseline get into the assertions
    correctly would be just as difficult as making sure they get into the
    method bodies correctly, in my view.

    The chance for an error under these conditions is not lessened by DBC.

(2) If DBC were used as an "executable SRS", the number of assertions in
    the system could very easily grow so large as to cause deadlines to
    be missed. I suppose that most Eiffel programmers would write
    preconditions in terms of public attributes (i.e. require
    (My_Object.Attr_1 = whatever)), but even simple comparisons like
    this could really add up if there are enough of them. This point is
    only of concern for developers of large real-time systems.
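[Editor's sketch: the kind of attribute-comparison precondition described
above can be illustrated outside Eiffel. Below is a minimal Python
sketch; the Valve class and its pressure attribute are hypothetical, not
from the post. The point it shows is that even a trivially cheap check
executes on every call.]

```python
# Minimal Python sketch of an Eiffel-style precondition check.
# The class and attribute names (Valve, pressure) are hypothetical.

def require(predicate, message):
    """Raise if a precondition does not hold (like Eiffel's `require`)."""
    if not predicate:
        raise AssertionError("precondition violated: " + message)

class Valve:
    def __init__(self):
        self.pressure = 0      # public attribute, analogous to Attr_1

    def open(self, target):
        # Cheap comparison, but it runs on *every* call -- the scaling
        # concern is the sum of many such checks in a real-time system.
        require(0 <= target <= 100, "0 <= target <= 100")
        self.pressure = target

v = Valve()
v.open(50)          # precondition holds
try:
    v.open(150)     # precondition violated, raises AssertionError
except AssertionError as e:
    print(e)
```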

I am not saying here that I think DBC is inappropriate for large
real-time systems. On the contrary, I think that selective application
of it might be of real benefit in some systems. What I am criticizing is
the notion that DBC can be used to basically code the system
requirements right into the mission software ("provably correct S/W
courtesy of DBC"). This might be true for some very small systems, but
on the large systems I have worked on I think an approach like this
would have failed miserably due to the two scaling problems mentioned
above. I do not know how large the software system on Ariane V was, but
I suspect it was too large for an approach like this.

-------------------==== Posted via Deja News ====-----------------------
      http://www.dejanews.com/     Search, Read, Post to Usenet




^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: Use of DBC as "executable SRS": scaling problems
  1997-07-29  0:00 card
@ 1997-07-30  0:00 ` Nick Leaton
  1997-07-31  0:00   ` Ken Garlington
  1997-07-31  0:00 ` W. Wesley Groleau x4923
  1997-07-31  0:00 ` Don Harrison
  2 siblings, 1 reply; 15+ messages in thread
From: Nick Leaton @ 1997-07-30  0:00 UTC (permalink / raw)



card@syr.lmco.com wrote:
> 
> During the Ariane thread, some stated or implied that Eiffel's
> inclusion of preconditions and postconditions in method signatures and
> invariants in class specifications would have made the software more
> verifiable since the software requirements would have effectively been
> coded right into the class definitions.
> 
> I do not think that using DBC as an "executable SRS" (SRS == Software
> Requirements Specification) would have necessarily saved the Ariane V
> (I am not going to revisit the issues of testing & verification, etc).
> The reasons I think this way are not due to any inherent flaws in the
> DBC concept itself, but rather are due to problems of scale:
> 
> (1) In a large and complex system, the number of preconditions and post-
>     conditions in a complex class hierarchy could get extremely large.

Divide and conquer still applies. One would hope that the component
pieces of your system will still be small. Using assertions at this
level is an advantage.

> I was a member of a team which built a real-time database manager for
> the Seawolf submarine. This software component had over 300
> requirements, and we had to verify almost all of them by test (there
> were a *very small* number that were verified by inspection). These
> tests included timing tests that were conducted on the actual hardware
> that was delivered to the Seawolf.
> 
>     In a large system like this, the verification and validation
>     (V & V) process is difficult. It is hard to keep track of 300+
>     requirements, especially since those requirements frequently
>     change during the development of the system! (It took a team of 3+
>     people about 2.5 years to build and verify the Seawolf's database
>     manager). If the requirements are expressed in code that is part
>     of your method signatures, the task will not (IMHO) get any
>     easier. In fact, it would likely get worse since there would be
>     additional assertions on low-level "utility" classes that are not
>     part of the system's requirements.  These would also have to be
>     verified against another set of requirements.

Assertions help with verification and validation. 

1) Violations get caught early in the design process. 
2) Writing assertions forces you to consider whether the requirements
   are well formed. If you can't write an assertion, has the requirement
   been stated in an unambiguous way? Is it realisable? etc ...
3) There are requirements that cannot be coded as assertions. For
   example, a bank may have a requirement that the software should
   try to prevent the bank becoming bankrupt. A perfectly normal
   requirement, and the software can help perform this task. However,
   stated as such, you cannot code it, and you have a problem testing it.
4) Is having assertions on utility classes, verifying them, and writing
   requirements for them really a problem?


>     In the end, you would have a *very large* number of assertions to keep
>     track of and update as the system's requirements changed. Making sure
>     that the changes in your requirements baseline get into the assertions
>     correctly would be just as difficult as making sure they get into the
>     method bodies correctly, in my view.

Again, having the assertions in place, and having the verification and
validation performed automatically as you run, makes it more likely that
you find the errors early rather than late.

>     The chance for an error under these conditions is not lessened by DBC.

But the impact of it is, as you are more likely to catch it earlier.
Remember, you don't have to write the assertions; Eiffel is simply
giving you an extra tool.

> (2) If DBC were used as an "executable SRS", the number of assertions
>     in the system could very easily grow so large as to cause
>     deadlines to be missed.

Assertions are inherited. It is common for base classes to have lots of
assertions on deferred features (virtual, in C++ parlance). Descendant
classes don't need many, as they inherit the assertions.
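[Editor's sketch: the inheritance point can be illustrated in Python.
Eiffel checks inherited assertions automatically; in this sketch the
base-class method performs the checks explicitly, and all class names
(Buffer, RingBuffer) are hypothetical.]

```python
# Python sketch of inherited assertions: the contract lives once, in the
# base class, and every descendant's implementation is checked against it.

class Buffer:
    """Base class: holds the contract for put()."""
    def put(self, item):
        # precondition (written once, in the base class)
        assert not self.is_full(), "precondition: not full"
        old_count = self.count()
        self._put(item)                     # deferred implementation
        # postcondition (also applied to every descendant)
        assert self.count() == old_count + 1, "postcondition: count + 1"

    def _put(self, item): raise NotImplementedError
    def is_full(self):    raise NotImplementedError
    def count(self):      raise NotImplementedError

class RingBuffer(Buffer):
    """Descendant: supplies only the implementation, no new assertions."""
    def __init__(self, capacity):
        self.items, self.capacity = [], capacity
    def _put(self, item): self.items.append(item)
    def is_full(self):    return len(self.items) >= self.capacity
    def count(self):      return len(self.items)

b = RingBuffer(2)
b.put("x")            # checked against the base-class contract
```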

>     I suppose that most Eiffel programmers would write preconditions
>     in terms of public attributes (i.e. require (My_Object.Attr_1 =
>     whatever)), but even simple comparisons like this could really add
>     up if there are enough of them. This point is only of concern for
>     developers of large real-time systems.

Most assertions are of this form -- checks that a reference is not void.
As a result they are cheap to evaluate. In other systems you add such
checks into the implementation anyway, so you have to pay for them
regardless.

> 
> I am not saying here that I think DBC is inappropriate for large
> real-time systems. On the contrary, I think that selective application
> of it might be of real benefit in some systems. What I am criticizing
> is the notion that DBC can be used to basically code the system
> requirements right into the mission software ("provably correct S/W
> courtesy of DBC"). This might be true for some very small systems, but
> on the large systems I have worked on I think an approach like this
> would have failed miserably due to the two scaling problems mentioned
> above. I do not know how large the software system on Ariane V was,
> but I suspect it was too large for an approach like this.

Again, not true in practice. You are building large systems out of small
systems, so why shouldn't the requirements of the small systems utilise
DBC? You cannot lose anything, only gain.

Some requirements cannot be stated easily in any language, as you have
pointed out. Some assertions cannot be easily, or efficiently, coded in
Eiffel (or other languages).

Before I started using Eiffel, I had similar doubts to yours about
assertions, including the suspicion that programmers would not use them,
being lazy, under pressure ... However, you do write them, and you do
think hard about them. This is for the reason I've stated earlier: you
catch mistakes earlier in the development process, and writing them
makes your design clearer.

-- 

Nick





* Re: Use of DBC as "executable SRS": scaling problems
  1997-07-31  0:00 Use of DBC as "executable SRS": scaling problems card
@ 1997-07-31  0:00 ` Nick Leaton
  1997-08-01  0:00 ` Don Harrison
  1997-08-13  0:00 ` Lee Webber
  2 siblings, 0 replies; 15+ messages in thread
From: Nick Leaton @ 1997-07-31  0:00 UTC (permalink / raw)



card@syr.lmco.com wrote:
> 
> In my initial post, I said that I thought it would be unwise
> to try to "code" the software requirements specification for a
> complex software component as a set of assertions (boolean
> expressions) due to problems of scale. The reason for this is
> that a requirement stated in the reader's natural language
> (English, French ...) might require a significant number of
> boolean expressions in the software implementation. When that
> requirement is changed, the impact of the change could be
> significant since "dual maintenance" is required- that is, the
> assertions must be checked and changed as necessary as well as
> the method bodies.  Note that it is conceivable that only one
> or the other may need to change. For example, if the requirement
> change resulted in a change to the valid range of an attribute,
> perhaps only the precondition or postcondition would need to
> be modified while the algorithm in the method body could be
> left alone.

I think we have been misunderstanding each other. Eiffel doesn't attempt
to code the very high-level requirements as assertions. However, I
strongly believe that you should code the lower-level ones.

> In response to my original concerns about scaling, Nick Leaton
> replied:
> 
> >Divide and conquer still applies. One would hope that the component
> >pieces of your system will still be small. Using assertions at this level
> >is an advantage.
> 
> Divide-and-conquer (composition) does make the process of building
> complex software more manageable, but it does not necessarily reduce
> the scaling problem. A complex system is still a complex system, and
> composition will not reduce the number of assertions you might need
> to build it (it may even increase the number since decomposing the
> system will produce more inter-class interfaces).

I think it does help, particularly when you inherit the assertions,
which means that you only have them in one place; this dramatically
reduces the number you would have to code in other systems. Out of
interest, I looked at one of my large hierarchies. In fact there are
fewer assertions than I thought. A lot are at the top, as expected, and
then on average one or two per class. I tend to have small classes.

> In my discussion of this scaling issue, I wrote the following in
> my original post:
> >In the end, you would have a *very large* number of assertions to keep
> >track of and update as the system's requirements changed. Making sure
> >that the changes in your requirements baseline get into the assertions
> >correctly would be just as difficult as making sure they get into the
> >method bodies correctly, in my view.

There is a difference between the number of assertions that get executed
and the number in the code, again because of the inheritance. The
assertions relating to requirements tend to be in the base classes.
Keeping things in line is, in practice, not as bad as you think. Before
I started using Eiffel, I had the same opinions as you on the matter,
but in practice it isn't a problem.

> And Nick Leaton replied:
> >Again, having the assertions in place, and having the verification and
> >validation performed automatically as you run makes it more likely that
> >you find the errors early rather than later.
> 
> As part of our software process here, our software tests must verify
> all of the testable SRS requirements and cover 100% of the branches
> in the code (we use a code coverage tool to verify this). In Ada, the
> only kind of automatically-checked "contract" (if you will) is Ada's
> type-checking mechanism, which ensures that an array index will not be
> out of bounds, a null pointer will not be dereferenced, an illegal type
> cast will not be performed, etc. If any of these "contracts" are
> violated, Ada's Constraint_Error exception is raised.
> 
> Our testing procedure ensures that almost all errors of this type will
> be caught. If we were developing software in Eiffel, we would still
> require the same testing procedures (indeed, if you don't execute a
> branch of the code, the assertions along that branch won't verify
> anything!). As I mentioned earlier, Eiffel's assertions capture more
> of the "behavioral"/semantic aspects of class methods since they are
> not necessarily verifying that the object's attributes are legal values
> (Ada range check), but rather that the object is in the proper *state*.
> Having an exception raised at the point where our "state machine"
> broke down would give better error localization in many cases. One of
> the common cases is managing an internal data structure like a list
> or tree. In Ada, a procedure to insert an element might mess up the
> list pointers without causing a Constraint_Error exception (i.e. a
> next pointer is accidentally set to null, but the insert procedure
> doesn't dereference this pointer so no exception is raised). The error
> would then not be detected until a later point in time when the list
> was being traversed. In a case like this, a postcondition check would
> have raised the exception in the offending routine, and I agree that
> this is better.
> 
> Either way, though, the branch of code in the hypothetical "insert
> element" method must be executed for its design error to be caught,
> so DBC will not allow the error to be found prior to the software
> test phase (presuming that the error has already gone undetected in
> code reviews, etc).

Exactly: we don't have cost-effective theorem provers for general
day-to-day use. You still have to write code that exercises your system
to test your classes, the same as in any other language.

> The other aspect of the scaling problem that I mentioned in my
> original post was the impact that a very large number of assertions
> could have on timing (applicable only to real-time systems):
> >(2) If DBC were used as an "executable SRS", the number of
> >    assertions in the system could very easily grow so large as
> >    to cause deadlines to be missed
>      <snip>
> >    I suppose that most Eiffel programmers would write preconditions
> >    in terms of public attributes (i.e. require (My_Object.Attr_1 =
> >    whatever)), but even simple comparisons like this could really
> >    add up if there are enough of them. This point is only of
> >    concern for developers of large real-time systems.
> 
> Nick Leaton replied:
> >Most assertions are of this form. Checks that a reference is not void.
> >As a result they are cheap to evaluate.
> 
> My point was scale: each one may be "cheap", but what if there are
> a large number of them?

It can be an issue. See later for how to control assertions.

> The strength of the Eiffel assertion mechanism is that it serves as a
> way to verify that your software "state machine" is behaving as it
> should. However, because assertions are actually verifying the state
> transitions of objects, they cannot be statically "optimized away"
> like Ada's range checks often can. (Most Ada compilers perform an
> analysis of array index operations, assignments, and type conversions
> at compile time to determine whether or not a range check is required.
> For this reason, suppressing run-time checking on an Ada program often
> has less of an impact than many people would expect).
> Such compile-time optimization is generally not applicable to
> assertions because the object's state is determined by its initial
> state and the sequence of methods that were called on it after
> initialization. To "optimize away" a precondition on an inter-class
> method call, the Eiffel compiler would have to know the client
> object's initial state and every method call that has been made to
> the object! Since this is impossible for all but trivial cases, it is
> therefore safe to say that if you include assertions in your class
> definitions, they will cost you at run-time. If there are a lot of
> them, this could be a problem in a real-time system. (NOTE: I am
> presuming the use of an Eiffel *compiler* here, not an interpreter).
> 
> Nick also added:
> >In other systems you add such checks into the implementation, so
> >you have to do them anyway.
> 
> This is a matter of programming style. In my experience, it is
> generally not done. That is, special "if" statements to check an
> object's state are *usually* not written at the start of the class's
> methods (preconditions) or at the end of the methods (postconditions).
> I'm not saying this is good, I'm just saying that in my experience
> people do not write code "Eiffel-style" in Ada/FORTRAN/C etc.
> Precondition/postcondition/invariant violations are instead caught
> during testing, as inconsistent/invalid states usually manifest
> themselves via Constraint_Errors, lost/corrupt data, etc. (Again, I
> think Eiffel's assertions offer superior error localization during
> testing). If the testing is rigorous, you can validate your
> software state machine that way and not "pay the run-time price" for
> assertions. If your testing is sparse...

I agree about the lack of people doing this in other languages. Eiffel
supports this, and in practice you do write them. It is a huge help in
sorting out a good design if you write them at the time you design the
class's interface.

> By the way, is it possible to "turn off" Eiffel's assertions after
> the code has been fully tested? (I don't mean making a source code
> change like commenting them out; I'm referring to a compile-time
> option like Ada's "suppress range checks").

Yes you can, in more than one way. In the Ace file, which is the
equivalent of the link/make file, you can enable/disable precondition,
postcondition and invariant assertions. This can be done without
recompiling the code.

You can also precompile libraries with assertions disabled as above.
So if you have a very robust and tested library, you wouldn't
necessarily want to run with postconditions or invariants, but you
probably would still want to run with preconditions on.
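[Editor's sketch: the idea of switching assertion kinds on and off
without recompiling, as the Ace file allows, can be approximated in
Python with environment-driven flags. The flag names and the sqrt_floor
function are hypothetical, not Eiffel's actual option names.]

```python
# Python analogue of disabling contract checks without touching source,
# roughly what an Eiffel Ace file does per assertion kind.
import os

# One switch per assertion kind, read from the environment at startup.
CHECK_REQUIRE = os.environ.get("CHECK_REQUIRE", "1") == "1"
CHECK_ENSURE  = os.environ.get("CHECK_ENSURE",  "1") == "1"

def require(cond, msg):
    if CHECK_REQUIRE and not cond:
        raise AssertionError("require: " + msg)

def ensure(cond, msg):
    if CHECK_ENSURE and not cond:
        raise AssertionError("ensure: " + msg)

def sqrt_floor(n):
    require(n >= 0, "n >= 0")
    r = int(n ** 0.5)
    ensure(r * r <= n < (r + 1) * (r + 1), "r is the floor square root")
    return r

print(sqrt_floor(10))   # 3
```

Running with CHECK_ENSURE=0 in the environment would skip the
postcondition while still enforcing the precondition, mirroring the
"preconditions on, postconditions off" deployment Nick describes.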

> Finally, Don Harrison wrote:
> >I wonder whether there has been some confusion here over the use of
> >"specification" in different contexts. Eiffel people use the term
> >in talking about assertions. Ada people tend to associate it with
> >SRSes.
> 
> Actually, "Ada people" usually use the term to refer to package
> specifications or procedure/function/task specifications. Those of
> us in the defense industry use it for formal SRSes, though, since
> the US DoD usually requires a customer-approved SRS for any soft-
> ware purchased with DoD dollars.
> 
> - Mike
> 

-- 

Nick





* Re: Use of DBC as "executable SRS": scaling problems
@ 1997-07-31  0:00 card
  1997-07-31  0:00 ` Nick Leaton
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: card @ 1997-07-31  0:00 UTC (permalink / raw)
  To: card


In my initial post, I said that I thought it would be unwise
to try to "code" the software requirements specification for a
complex software component as a set of assertions (boolean
expressions) due to problems of scale. The reason for this is
that a requirement stated in the reader's natural language
(English, French ...) might require a significant number of
boolean expressions in the software implementation. When that
requirement is changed, the impact of the change could be
significant since "dual maintenance" is required- that is, the
assertions must be checked and changed as necessary as well as
the method bodies.  Note that it is conceivable that only one
or the other may need to change. For example, if the requirement
change resulted in a change to the valid range of an attribute,
perhaps only the precondition or postcondition would need to
be modified while the algorithm in the method body could be
left alone.

In response to my original concerns about scaling, Nick Leaton
replied:

>Divide and conquer still applies. One would hope that the component
>pieces of your system will still be small. Using assertions at this level
>is an advantage.

Divide-and-conquer (composition) does make the process of building
complex software more manageable, but it does not necessarily reduce
the scaling problem. A complex system is still a complex system, and
composition will not reduce the number of assertions you might need
to build it (it may even increase the number since decomposing the
system will produce more inter-class interfaces).

In my discussion of this scaling issue, I wrote the following in
my original post:
>In the end, you would have a *very large* number of assertions to keep
>track of and update as the system's requirements changed. Making sure
>that the changes in your requirements baseline get into the assertions
>correctly would be just as difficult as making sure they get into the
>method bodies correctly, in my view.

And Nick Leaton replied:
>Again, having the assertions in place, and having the verification and
>validation performed automatically as you run makes it more likely that
>you find the errors early rather than later.

As part of our software process here, our software tests must verify
all of the testable SRS requirements and cover 100% of the branches
in the code (we use a code coverage tool to verify this). In Ada, the
only kind of automatically-checked "contract" (if you will) is Ada's
type-checking mechanism, which ensures that an array index will not be
out of bounds, a null pointer will not be dereferenced, an illegal type
cast will not be performed, etc. If any of these "contracts" are
violated, Ada's Constraint_Error exception is raised.

Our testing procedure ensures that almost all errors of this type will
be caught. If we were developing software in Eiffel, we would still
require the same testing procedures (indeed, if you don't execute a
branch of the code, the assertions along that branch won't verify
anything!). As I mentioned earlier, Eiffel's assertions capture more
of the "behavioral"/semantic aspects of class methods since they are
not necessarily verifying that the object's attributes are legal values
(Ada range check), but rather that the object is in the proper *state*.
Having an exception raised at the point where our "state machine"
broke down would give better error localization in many cases. One of
the common cases is managing an internal data structure like a list
or tree. In Ada, a procedure to insert an element might mess up the
list pointers without causing a Constraint_Error exception (i.e. a
next pointer is accidentally set to null, but the insert procedure
doesn't dereference this pointer so no exception is raised). The error
would then not be detected until a later point in time when the list
was being traversed. In a case like this, a postcondition check would
have raised the exception in the offending routine, and I agree that
this is better.
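[Editor's sketch: the list scenario above can be illustrated in Python,
with a plain assert standing in for the Eiffel postcondition. All names
are hypothetical. The point is error localization: a bug in insert that
loses part of the list trips the postcondition in the offending routine,
instead of surfacing during a later traversal.]

```python
# Python sketch of the list example: a postcondition on insert_front()
# catches a broken `next` pointer at the point of insertion.

class Node:
    def __init__(self, value):
        self.value, self.next = value, None

class LinkedList:
    def __init__(self):
        self.head = None

    def length(self):
        n, node = 0, self.head
        while node:
            n, node = n + 1, node.next
        return n

    def insert_front(self, value):
        old_length = self.length()
        node = Node(value)
        node.next = self.head     # forgetting this line would silently
        self.head = node          # lose the tail of the list
        # postcondition: no element was lost by the insertion
        assert self.length() == old_length + 1, "postcondition: length + 1"

lst = LinkedList()
lst.insert_front(2)
lst.insert_front(1)
print(lst.length())   # 2
```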

Either way, though, the branch of code in the hypothetical "insert
element" method must be executed for its design error to be caught,
so DBC will not allow the error to be found prior to the software
test phase (presuming that the error has already gone undetected in
code reviews, etc).

The other aspect of the scaling problem that I mentioned in my
original post was the impact that a very large number of assertions
could have on timing (applicable only to real-time systems):
>(2) If DBC were used as an "executable SRS", the number of
>    assertions in the system could very easily grow so large as
>    to cause deadlines to be missed
     <snip>
>    I suppose that most Eiffel programmers would write preconditions
>    in terms of public attributes (i.e. require (My_Object.Attr_1 =
>    whatever)), but even simple comparisons like this could really
>    add up if there are enough of them. This point is only of
>    concern for developers of large real-time systems.

Nick Leaton replied:
>Most assertions are of this form. Checks that a reference is not void.
>As a result they are cheap to evaluate.

My point was scale: each one may be "cheap", but what if there are
a large number of them?

The strength of the Eiffel assertion mechanism is that it serves as a
way to verify that your software "state machine" is behaving as it
should. However, because assertions are actually verifying the state
transitions of objects, they cannot be statically "optimized away"
like Ada's range checks often can. (Most Ada compilers perform an
analysis of array index operations, assignments, and type conversions
at compile time to determine whether or not a range check is required.
For this reason, suppressing run-time checking on an Ada program often
has less of an impact than many people would expect).
Such compile-time optimization is generally not applicable to
assertions because the object's state is determined by its initial
state and the sequence of methods that were called on it after
initialization. To "optimize away" a precondition on an inter-class
method call, the Eiffel compiler would have to know the client
object's initial state and every method call that has been made to
the object! Since this is impossible for all but trivial cases, it is
therefore safe to say that if you include assertions in your class
definitions, they will cost you at run-time. If there are a lot of
them, this could be a problem in a real-time system. (NOTE: I am
presuming the use of an Eiffel *compiler* here, not an interpreter).

Nick also added:
>In other systems you add such checks into the implementation, so
>you have to do them anyway.

This is a matter of programming style. In my experience, it is
generally not done. That is, special "if" statements to check an
object's state are *usually* not written at the start of the class's
methods (preconditions) or at the end of the methods (postconditions).
I'm not saying this is good, I'm just saying that in my experience
people do not write code "Eiffel-style" in Ada/FORTRAN/C etc.
Precondition/postcondition/invariant violations are instead caught
during testing, as inconsistent/invalid states usually manifest
themselves via Constraint_Errors, lost/corrupt data, etc. (Again, I
think Eiffel's assertions offer superior error localization during
testing). If the testing is rigorous, you can validate your
software state machine that way and not "pay the run-time price" for
assertions. If your testing is sparse...

By the way, is it possible to "turn off" Eiffel's assertions after
the code has been fully tested? (I don't mean making a source code
change like commenting them out; I'm referring to a compile-time
option like Ada's "suppress range checks").

Finally, Don Harrison wrote:
>I wonder whether there has been some confusion here over the use of
>"specification" in different contexts. Eiffel people use the term
>in talking about assertions. Ada people tend to associate it with
>SRSes.

Actually, "Ada people" usually use the term to refer to package
specifications or procedure/function/task specifications. Those of
us in the defense industry use it for formal SRSes, though, since
the US DoD usually requires a customer-approved SRS for any soft-
ware purchased with DoD dollars.

- Mike

---------------
Michael P. Card
Lockheed Martin Ocean, Radar and Sensor Systems Division
Syracuse, NY 13221
voice: (315)-456-3022  FAX: (315)-456-2414  e-mail:card@syr.lmco.com

-------------------==== Posted via Deja News ====-----------------------
      http://www.dejanews.com/     Search, Read, Post to Usenet





* Re: Use of DBC as "executable SRS": scaling problems
  1997-07-29  0:00 card
  1997-07-30  0:00 ` Nick Leaton
  1997-07-31  0:00 ` W. Wesley Groleau x4923
@ 1997-07-31  0:00 ` Don Harrison
  2 siblings, 0 replies; 15+ messages in thread
From: Don Harrison @ 1997-07-31  0:00 UTC (permalink / raw)



Mike Card wrote:

:I do not think that using DBC as an "executable SRS" (SRS == Software
:Requirements Specification) would have ... 

I wonder whether there has been some confusion here over the use of 
"specification" in different contexts. Eiffel people use the term in talking 
about assertions. Ada people tend to associate it with SRSes.

I'm not sure whether anyone advocated using DBC to replace SRSes. Probably,
OO-structured SRSes have some potential and shouldn't be limited by scale, 
but replacing SRSes entirely by coded assertions may be going a little too far.


Don.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Don Harrison             donh@syd.csa.com.au







* Re: Use of DBC as "executable SRS": scaling problems
  1997-07-29  0:00 card
  1997-07-30  0:00 ` Nick Leaton
@ 1997-07-31  0:00 ` W. Wesley Groleau x4923
  1997-07-31  0:00   ` Al Christians
  1997-07-31  0:00 ` Don Harrison
  2 siblings, 1 reply; 15+ messages in thread
From: W. Wesley Groleau x4923 @ 1997-07-31  0:00 UTC (permalink / raw)



card@syr.lmco.com wrote:
> I do not think that using DBC as an "executable SRS" (SRS == Software
> (1) In a large and complex system, the number of preconditions and 
>     post-conditions in a complex class hierarchy could get extremely large. 

Although they didn't demand that it be "in the code", the inquiry board
did note
  " that the systems specification of the SRI does not indicate
    operational restrictions that emerge from the chosen
    implementation. Such a declaration of limitation, which should
    be mandatory for every mission-critical device, would have
    served to identify any non-compliance with the trajectory
    of Ariane 5. "
which to me means

1. given the requirements (ariane 4)
2. design and implement the solution (the code that failed)
3. document any restrictions which, though not requirements, are
   consequences of the chosen design.

While the results of step three are nearly guaranteed to be 
incomplete, for reasons already beat to death in this discussion,
Bertrand Meyer came close to saying (correctly) that the effort
of doing this _might_ have prevented the failure.  Where he goes 
too far is on two points (now I'm repeating old news):

1. He says "probably would have" instead of "might have"
2. If you're not doing this in Eiffel syntax, you're not
   really doing it.

Now the last sentence will undoubtedly draw "he never said that"
flames, so let me admit that (2) is an oversimplification of
his claims that only Eiffel _really_ has "design by contract"

----------------------------------------------------------------------
    Wes Groleau, Hughes Defense Communications, Fort Wayne, IN USA
Senior Software Engineer - AFATDS                  Tool-smith Wanna-be

Don't send advertisements to this domain unless asked!  All disk space
on fw.hac.com hosts belongs to either Hughes Defense Communications or 
the United States government.  Using email to store YOUR advertising 
on them is trespassing!
----------------------------------------------------------------------





* Re: Use of DBC as "executable SRS": scaling problems
  1997-07-31  0:00 ` W. Wesley Groleau x4923
@ 1997-07-31  0:00   ` Al Christians
  0 siblings, 0 replies; 15+ messages in thread
From: Al Christians @ 1997-07-31  0:00 UTC (permalink / raw)



Meyer's book seems to allow two kinds of preconditions: those that are
explicitly included in the require clause, and those that must be
omitted, usually either for reasons of prohibitive redundancy with the
mainline logic or impossibility of evaluation before the requested
service is actually attempted.  So he does see some use for return codes
(for example, from external database access attempts) and for exceptions
thrown from the middle of a method when the method finally figures out
that it can't do what was requested.

So despite any suggestions or optimism, putting everything into explicit
preconditions, postconditions, and invariants is not a 100% solution. 
There will often be a buried implicit precondition somewhere.  If DBC is
99% effective, does that make the hidden precondition much more
dangerous because of an illusion of explicitness and reliability
supported by DBC?  
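Al's point about preconditions that cannot be evaluated up front can be
sketched in a few lines. This is a hypothetical illustration in Python
(asserts standing in for Eiffel's require clause; the function and
exception names are invented, not from Meyer's book):

```python
class DatabaseDown(Exception):
    """Raised mid-method: 'the database is reachable' is a precondition
    that can only be discovered by attempting the service."""

def fetch_balance(account_id, db):
    # Explicit precondition: cheap to state and check up front.
    assert account_id is not None, "require: account_id must exist"
    # Implicit precondition: reachability can only be tested by trying.
    record = db.get(account_id)
    if record is None:
        raise DatabaseDown("service attempted and failed")
    return record

balances = {"acct-1": 100}
print(fetch_balance("acct-1", balances))   # both preconditions hold -> 100
try:
    fetch_balance("acct-2", balances)      # implicit precondition violated
except DatabaseDown as exc:
    print("caught:", exc)
```

The second call is exactly the "buried implicit precondition" case: no
require clause flagged it, and it only surfaces once the service runs.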

Al




W. Wesley Groleau x4923 wrote:
> 
> card@syr.lmco.com wrote:
> > I do not think that using DBC as an "executable SRS" (SRS == Software
> > (1) In a large and complex system, the number of preconditions and
> >     post-conditions in a complex class hierarchy could get extremely large.
> 
> Although they didn't demand that it be "in the code", the inquiry board
> did note
>   " that the systems specification of the SRI does not indicate
>     operational restrictions that emerge from the chosen
>     implementation. Such a declaration of limitation, which should
>     be mandatory for every mission-critical device, would have
>     served to identify any non-compliance with the trajectory
>     of Ariane 5. "





* Re: Use of DBC as "executable SRS": scaling problems
  1997-07-30  0:00 ` Nick Leaton
@ 1997-07-31  0:00   ` Ken Garlington
  1997-08-11  0:00     ` Nick Leaton
  0 siblings, 1 reply; 15+ messages in thread
From: Ken Garlington @ 1997-07-31  0:00 UTC (permalink / raw)



Nick Leaton wrote:
> 
> Divide and conquer still applies. One would hope that the component
> pieces of your system will still be small. Using assertions at this
> level is an advantage.

Unfortunately, "divide and conquer" leaves open the issue of interface
validation. In addition to showing the components are valid internally,
you must also show their interactions are valid. DBC may be able to
detect (at run-time) when an invalid interaction occurs, but to
_prevent_ the invalid interaction requires much system-wide knowledge.
This is particularly true of the Ariane 5 case, where something
*outside the system* (the flight profile) was the root cause of the
error. Consider a chain of events like:

	environment generates change
	sensor senses change
	I/O device reads sensor
	software gets value from I/O device
	software does set of calculations (including generation of error term)
	error term is converted to output format [failure here]

To determine that the failure would occur, the manual reviewer must
consider the full chain to trace the relationship of the environment
definition to the out-of-range conversion. This is easy during
execution (assuming you have a good definition of the environment), but
hard for a human. And you have to do this for every thread (more or
less). No one will tell you in advance "the error term conversion is
where the error is, now just figure out why."
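The chain above can be made concrete with a minimal Python sketch
(asserts standing in for contract checks; the stage names and the
16-bit limit are invented for illustration, not taken from the actual
Ariane code). Each stage looks fine in isolation; only driving the full
thread with a wide-enough environment violates the final conversion's
precondition:

```python
MAXIMUM = 32767  # hypothetical output format limit (signed 16-bit)

def sensor(environment):
    return environment        # sensor senses change

def read_io(value):
    return value              # I/O device reads sensor

def calculate(value):
    return value * 4          # calculations generate the error term

def convert(error_term):
    # Precondition of the conversion routine: value fits output format.
    assert error_term <= MAXIMUM, "error term exceeds output format"
    return error_term

def full_thread(environment):
    return convert(calculate(read_io(sensor(environment))))

print(full_thread(1000))      # narrow flight profile: every check passes
try:
    full_thread(10000)        # wider profile: failure surfaces only at the end
except AssertionError as exc:
    print("assertion violated:", exc)
```

Note that no single stage is "wrong"; the reviewer has to trace the
whole thread from environment to conversion to predict the violation.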

> 
>  I
> > was
> >     a member of a team which built a real-time database manager for the
> > Seawolf
> >     submarine. This software component had over 300 requirements, and we
> > had to
> >     verify almost all of them by test (there were a *very small* number
> > that
> >     were verified by inspection). These tests included timing tests that
> > were
> >     conducted on the actual hardware that was delivered to the Seawolf.
> >
> >     In a large system like this, the verification and validation (V & V)
> >     process is difficult. It is hard to keep track of 300+ requirements,
> >     especially since those requirements frequently change during the
> >     development of the system! (It took a team of 3+ people about 2.5
> > years
> >     to build and verify the Seawolf's database manager). If the
> > requirements
> >     are expressed in code that is part of your method signatures, the task
> >     will not (IMHO) get any easier. In fact, it would likely get worse
> > since
> >     there would be additional assertions on low-level "utility" classes
> >     that are not part of the system's requirements.  These would also have
> > to
> >     be verified against another set of requirements.
> 
> Assertions help with verification and validation.
> 
> 1) Violations get caught early in the design process.

See above.

> 2) Writing assertions forces you to consider if the requirements
>    are well formed. If you can't write an assertion, has the requirement
>    been stated in an unambiguous way? Is it realisable? etc ...

However, they cannot usually detect _missing_ requirements (the
Ariane 5 case).

> 3) There are requirements that cannot be coded as assertions. For
>    example a bank may have a requirement that the software should
>    try and prevent the bank becoming bankrupt. Perfectly normal
>    requirement. The software can help perform this task. However,
>    stated as such, you cannot code it. You have a problem testing it.

Actually, it's pretty easy, assuming you have a good definition of the
bank environment and can run adequate scenarios. It is difficult, on
the other hand, to have a system be able to internalize all of the
factors regarding its environment (no system can fully describe itself,
much less itself and its environment!)

> 4) Is having assertions on utility classes and verifying them, writing
>    requirements for them a problem?
> 
> >     In the end, you would have a *very large* number of assertions to keep
> >     track of and update as the system's requirements changed. Making sure
> >     that the changes in your requirements baseline get into the assertions
> >     correctly would be just as difficult as making sure they get into the
> >     method bodies correctly, in my view.
> 
> Again, having the assertions in place, and having the verification and
> validation performed automatically as you run, makes it more likely
> that you find the errors early rather than later.

First, "running" is not early in the development process!

Second, it seems that there is a schism in the DBC community. Some feel
that testing (execution of the code) is the important part, while
others emphasize manual verification. It makes it very difficult to
discuss the advantages of DBC, when the response to "it's difficult to
manually verify" is "it helps catch errors during test" and the
response to "it's difficult to use assertions to test" is "it helps
catch errors during manual verification."

> 
> >     The chance for an error under these conditions is not lessened by DBC.
> 
> But the impact of it is, as you are more likely to catch it earlier.
> Remember, you don't have to write the assertions. ie. Eiffel is giving
> you an extra tool.

This is the other schism: Arguments about DBC are addressed in Eiffel
terms, and arguments about Eiffel are addressed in DBC terms ("DBC does
not require Eiffel", for example).

> 
> > (2) If DBC were used as an "executable SRS", the number of assertions in
> > the
> >     system could very easily grow so large as to cause deadlines to be
> > missed.
> 
> Assertions are inherited. It is common for base classes to have lots
> of assertions and deferred features (pure virtual in C++ parlance).
> Inheriting classes don't have many, as they have inherited the
> assertions.
> 
> >     I suppose that most Eiffel programmers would write preconditions in
> > terms
> >     of public attributes (i.e. require (My_Object.Attr_1 = whatever)), but
> >     even simple comparisons like this could really add up if there are
> > enough
> >     of them. This point is only of concern for developers of large
> > real-time
> >     systems.
> 
> Most assertions are of this form. Checks that a reference is not void.

However, most requirements are not in this form. Does this mean that
most requirements won't be written as assertions?

Furthermore, I can get assertions of this form without DBC/Eiffel
(e.g. in Ada, under any methodology). What's special about DBC if it
causes you to write code just to get checks you get for free in other
languages?

(BTW, this is the third schism. Compare and contrast the arguments by
other DBC/Eiffel advocates that most assertions are "high-level"
abstractions, making them easier to read. I'm pretty much convinced at
this point that DBC, like the infamous "object-oriented," can mean just
about anything to anyone!)

> As a result they are cheap to evaluate. In other systems you add
> such checks into the implementation, so you have to do them anyway.
> 
> >
> > I am not saying here that I think DBC is inappropriate for large real-time
> > systems. On the contrary, I think that selective application of it might
> > be
> > of real benefit in some systems. What I am criticizing is the notion that
> > DBC
> > can be used to basically code the system requirements right into the
> > mission
> > software ("provably correct S/W courtesy of DBC"). This might be true for
> > some
> > very small systems, but on the large systems I have worked on I think an
> > approach like this would have failed miserably due to the two scaling
> > problems
> > mentioned above. I do not know how large the software system on Ariane V
> > was,
> > but I suspect it was too large for an approach like this.
> 
> Again not true in practice. You are building large systems out of
> small systems. Why shouldn't the requirements of small systems
> utilise DBC? You cannot lose anything, only gain.
> 
> Some requirements cannot be stated in any language easily, as you have
> pointed out. Some assertions cannot be easily or efficiently coded in
> Eiffel (or other languages).
> 
> Before I started using Eiffel, I had similar doubts to yours about
> assertions, including the fact that I didn't think programmers would
> use them, being lazy, under pressure ... However, you do write them,
> and think hard about them. This is for the reason I've stated earlier:
> you catch mistakes earlier in the development process, and writing
> them makes your design clearer.
> 
> --
> 
> Nick





* Re: Use of DBC as "executable SRS": scaling problems
  1997-07-31  0:00 Use of DBC as "executable SRS": scaling problems card
  1997-07-31  0:00 ` Nick Leaton
@ 1997-08-01  0:00 ` Don Harrison
  1997-08-13  0:00 ` Lee Webber
  2 siblings, 0 replies; 15+ messages in thread
From: Don Harrison @ 1997-08-01  0:00 UTC (permalink / raw)



Mike Card wrote:

:Finally, Don Harrison wrote:
:>I wonder whether there has been some confusion here over the use of
:>"specification" in different contexts. Eiffel people use the term
:>in talking about assertions. Ada people tend to associate it with
:>SRSes.
:
:Actually, "Ada people" usually use the term to refer to package
:specifications or procedure/function/task specifications. 

That's a comfort to hear. :)

:Those of us in the defense industry use it for formal SRSes, though, since
:the US DoD usually requires a customer-approved SRS for any soft-
:ware purchased with DoD dollars.

Same deal here in Australia.


Don.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Don Harrison             donh@syd.csa.com.au







* Re: Use of DBC as "executable SRS": scaling problems
  1997-07-31  0:00   ` Ken Garlington
@ 1997-08-11  0:00     ` Nick Leaton
  1997-08-11  0:00       ` Ken Garlington
  0 siblings, 1 reply; 15+ messages in thread
From: Nick Leaton @ 1997-08-11  0:00 UTC (permalink / raw)



Ken Garlington wrote:
> 
> Nick Leaton wrote:
> >
> > Divide and conquer still applies. One would hope that the component
> > pieces of your system will still small. Using assertions at this level
> > is an advantage.
> 
> Unfortunately, "divide and conquer" leaves open the issue of interface
> validation. In addition to showing the components are valid internally,
> you must also show their interactions are valid. DBC may be able to
> detect (at run-time) when an invalid interaction occurs, but to
> _prevent_
> the invalid interaction requires much system-wide knowledge. This is
> particularly true of the Ariane 5 case, where something *outside the
> system* (the flight profile) was the root cause of the error. Consider
> a chain of events like:
> 
>         environment generates change
>         sensor senses change
>         I/O device reads sensor
>         software gets value from I/O device
>         software does set of calculations (including generation of error term)
>         error term is converted to output format [failure here]
> 
> To determine that the failure would occur, the manual reviewer must
> consider the full chain to trace the relationship of the environment
> definition
> to the out-of-range conversion. This is easy during execution (assuming
> you
> have a good definition of the environment), but hard for a human. And
> you have
> to do this for every thread (more or less). No one will tell you in
> advance
> "the error term conversion is where the error is, now just figure out
> why."

I think I get your logic. Where would you expect the error to be raised?

1) Getting the value? (Postcondition)
2) Sending the value to the calculator (Precondition)
3) In the conversion routine (Precondition)

I believe you should be writing all three. In practice they are
probably written by different people, so making the assumptions
transparent and executable is good practice.
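Nick's three assertion sites can be shown side by side. A hedged sketch
in Python rather than Eiffel (asserts standing in for ensure/require
clauses; all names and the limit are invented):

```python
LIMIT = 32767  # hypothetical output format limit

def get_value(io_device):
    value = io_device()
    assert value is not None        # (1) postcondition on getting the value
    return value

def calculate(value):
    assert isinstance(value, (int, float))  # (2) precondition of the calculator
    return value * 4

def convert(error_term):
    assert error_term <= LIMIT      # (3) precondition of the conversion routine
    return error_term

# All three checks pass on this input:
print(convert(calculate(get_value(lambda: 100))))
```

With all three written, a violated assumption is reported at the first
boundary it crosses, which is the point of writing them even when
different people own different stages.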

....


> > 2) Writting assertions forces you to consider if the requirements
> >    are well formed. If you can't write an assertion, has the requirement
> >    been stated in an unambiguous way? Is it realisable? etc ...
> 
> However, they cannot usually detect _missing_ requirements (Ariane 5
> case).

Yes. Do you have a method that does?

> > 3) There are requirements that cannot be coded as assertions. For
> >    example a bank may have a requirement that the software should
> >    try and prevent the bank becoming bankrupt. Perfectly normal
> >    requirement. The software can help perform this task. However,
> >    stated as such, you cannot code it. You have a problem testing it.
> 
> Actually, it's pretty easy, assuming you have a good definition of the
> bank environment and can run adequate scenarios. It is difficult, on the
> other hand, to have a system be able to internalize all of the factors
> regarding its environment (no system can fully describe itself, much
> less
> itself and its environment!)

Hmmm, OK, so we are now on to my home ground, even though I'm a pilot.
It isn't easy to code up these requirements, even running scenarios,
which some banks do (like the one I work for! ;-) ). The main reason is
that it is very hard sometimes to come up with a workable set of
assumptions on which to base your scenarios. Lots of banks have lost
lots of money making assumptions that are not valid.

> > 4) Is having assertions on utility classes and verifying them, writing
> >    requirements for them a problem?
> >
> > >     In the end, you would have a *very large* number of assertions to keep
> > >     track of and update as the system's requirements changed. Making sure
> > >     that the changes in your requirements baseline get into the assertions
> > >     correctly would be just as difficult as making sure they get into the
> > >     method bodies correctly, in my view.
> >
> > Again, having the assertions in place, and having the verification and
> > validation performed automatically as you run makes it more likely that
> > you find the errors early rather than latter.
> 
> First, "running" is not early in the development process!

Running the whole system, OK. 'Running' parts of the system? I want to
run my code as quickly as possible, in order to validate the design.
This is not contrary to 'accepted' practice. I want to develop quickly,
so I want to get to test quickly.

> Second, it seems that there is a schism in the DBC community. Some feel
> that
> testing (execution of the code) is the important part, while others
> emphasize
> manual verification. It makes it very difficult to discuss the
> advantages of
> DBC, when the response to "it's difficult to manually verify" is "it
> helps catch
> errors during test" and the response to "it's difficult to use
> assertions to test"
> is "it helps catch errors during manual verification."


Manual verification is a weak argument IMHO. However, in inspecting
code to see what it does in order to use it, assertions are very
useful. Execution is very important. As an aid to clear, good design, I
have found DBC to be an excellent aid to quality code.

> >
> > >     The chance for an error under these conditions is not lessened by DBC.
> >
> > But the impact of it is, as you are more likely to catch it earlier.
> > Remember, you don't have to write the assertions. ie. Eiffel is giving
> > you an extra tool.
> 
> This is the other schism: Arguments about DBC are addressed in Eiffel
> terms,
> and arguments about Eiffel are addressed in DBC terms ("DBC does not
> require Eiffel", for example).
>

Agreed; it just makes it easier to produce code with DBC, and so you
are more likely to produce code containing the assertions. DBC in C++
is very awkward, and from what I have been reading about Ada, the
situation there is vastly superior to C++, though there are some
drawbacks in replicating the inheritance of assertions. (See some
previous messages.)

Out of interest, do you think DBC is a bad concept, or are your
arguments mainly C++ vs Eiffel, or against the Ariane article?

> >
> > > (2) If DBC were used as an "executable SRS", the number of assertions in
> > > the
> > >     system could very easily grow so large as to cause deadlines to be
> > > missed.
> >
> > Assertions are inherited. It is common for base classes to have lots of
> > assertions, deferred features (virtual in C++ parlance). Inherit classes
> > don't have many as they have inherited the assertions.
> >
> > >     I suppose that most Eiffel programmers would write preconditions in
> > > terms
> > >     of public attributes (i.e. require (My_Object.Attr_1 = whatever)), but
> > >     even simple comparisons like this could really add up if there are
> > > enough
> > >     of them. This point is only of concern for developers of large
> > > real-time
> > >     systems.
> >
> > Most assertions are of this form. Checks that a reference is not void.
> 
> However, most requirements are not in this form. Does this mean that
> most
> requirements won't be written as assertions?

I disagree. You may not have a requirement explicitly written in your
design document. Let us say you have an object from an artificial
horizon, and that, for argument's sake, it contains three values for
pitch, yaw and bank. A method/routine that has one of these objects as
a parameter has a requirement that it is passed such an object
(existence and type conformity). Not many requirements documents will
explicitly state such a requirement, although they may state the
requirement that the object is optional if that were the case.

If your requirements are more complex, then break them down into
smaller requirements: divide and conquer.
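The artificial-horizon example can be sketched directly. This is a
hypothetical Python illustration (the `Attitude` class and
`wings_level` routine are invented; the assert plays the role of the
existence/type-conformity precondition that an SRS would rarely state):

```python
from dataclasses import dataclass

@dataclass
class Attitude:
    """Values an artificial horizon might supply."""
    pitch: float
    yaw: float
    bank: float

def wings_level(attitude):
    # The implicit requirement: the object exists and conforms to type.
    assert attitude is not None and isinstance(attitude, Attitude)
    return abs(attitude.bank) < 0.5

print(wings_level(Attitude(pitch=2.0, yaw=0.0, bank=0.1)))  # True
```

Passing `None` or the wrong type trips the precondition, even though no
requirements document ever said "check for void references".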

> Furthermore, I can get assertions of this form without DBC/Eiffel (e.g.
> in
> Ada, under any methodology). What's special about DBC if it causes you
> to write code just to get checks you get for free in other languages?

DBC is not language specific. Eiffel as a language makes it easy to
use DBC.

How do you get inheritance of assertions in C++ or Ada for free?
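One way to approximate assertion inheritance in a language without
built-in DBC is a base-class template method that owns the checks while
subclasses override only the deferred body. A hedged sketch (all names
invented; this approximates, not reproduces, Eiffel's semantics):

```python
class Account:
    def withdraw(self, amount):
        # "Inherited" precondition: every subclass gets this check.
        assert amount > 0, "require: amount positive"
        before = self.balance
        self._do_withdraw(amount)          # deferred feature
        # "Inherited" postcondition.
        assert self.balance == before - amount, "ensure: balance reduced"

    def _do_withdraw(self, amount):
        raise NotImplementedError

class Checking(Account):
    def __init__(self):
        self.balance = 100

    def _do_withdraw(self, amount):
        # Subclass supplies only the body; the contract comes for free.
        self.balance -= amount

c = Checking()
c.withdraw(30)
print(c.balance)   # 70
```

It works, but the contract only survives as long as nobody overrides
`withdraw` itself, which is exactly the "drawback in replicating the
inheritance of assertions" mentioned above.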

> (BTW, this is the third schism. Compare and contrast the arguments by
> other
> DBC/Eiffel advocates that most assertions are "high-level" abstractions,
> making them easier to read. I'm pretty much convinced at this point that
> DBC, like the infamous "object-oriented," can mean just about anything
> to
> anyone!)

I agree with the point about the high-level nature of assertions.
Looking through code that I have written, this is the case. Classes
with a large number of assertions are abstract base classes of
hierarchies. This is not surprising.

...

-- 

Nick





* Re: Use of DBC as "executable SRS": scaling problems
  1997-08-11  0:00     ` Nick Leaton
@ 1997-08-11  0:00       ` Ken Garlington
  1997-08-12  0:00         ` Jon S Anthony
  1997-08-12  0:00         ` Nick Leaton
  0 siblings, 2 replies; 15+ messages in thread
From: Ken Garlington @ 1997-08-11  0:00 UTC (permalink / raw)



Nick Leaton wrote:
> 
[snip]
> > Unfortunately, "divide and conquer" leaves open the issue of interface
> > validation. In addition to showing the components are valid internally,
> > you must also show their interactions are valid. DBC may be able to
> > detect (at run-time) when an invalid interaction occurs, but to
> > _prevent_
> > the invalid interaction requires much system-wide knowledge. This is
> > particularly true of the Ariane 5 case, where something *outside the
> > system* (the flight profile) was the root cause of the error. Consider
> > a chain of events like:
> >
> >         environment generates change
> >         sensor senses change
> >         I/O device reads sensor
> >         software gets value from I/O device
> >         software does set of calculations (including generation of error term)
> >         error term is converted to output format [failure here]
> >
> > To determine that the failure would occur, the manual reviewer must
> > consider the full chain to trace the relationship of the environment
> > definition
> > to the out-of-range conversion. This is easy during execution (assuming
> > you
> > have a good definition of the environment), but hard for a human. And
> > you have
> > to do this for every thread (more or less). No one will tell you in
> > advance
> > "the error term conversion is where the error is, now just figure out
> > why."
> 
> I think I get your logic. Where would you expect the error to be raised?
> 
> 1) Getting the value? (Postcondition)
> 2) Sending the value to the calculator (Precondition)
> 3) In the conversion routine (Precondition)
> 
> I believe you should be writing all three. In practice they are
> probably written by different people, so making the assumptions
> transparent and executable is good practice.

"Executable," unfortunately, doesn't help the reviewer, which was the
subject of my discussion. It also doesn't help if the invalid input is
not seen prior to operation.

I don't know exactly what you mean by "transparent," but I assume you
mean "easy to read." However, that's not the issue. Consider the
proposed Ariane-5 Eiffel solution. The assertion #3 ( X <= Maximum )
is certainly easy to read. Now, prove whether or not X _will_
exceed Maximum for the Ariane-5. That's much more complicated,
since there may be a great deal of complexity hiding between the
system environment inputs and the value of X when it reaches this
assertion. Note that assertion #1 may be based on different criteria
than #3, so #1 may pass while #3 fails. So, adding more assertions
doesn't help here; in fact, it dilutes the review effort.

> 
> ....
> 
> > > 2) Writting assertions forces you to consider if the requirements
> > >    are well formed. If you can't write an assertion, has the requirement
> > >    been stated in an unambiguous way? Is it realisable? etc ...
> >
> > However, they cannot usually detect _missing_ requirements (Ariane 5
> > case).
> 
> Yes. Do you have a method that does?

Actually, there are such techniques. Failure Modes and Effects Testing
is very powerful for determining systems that are missing critical
fault-detection and fault-tolerance requirements, for example.

> 
> > > 3) There are requirements that cannot be coded as assertions. For
> > >    example a bank may have a requirement that the software should
> > >    try and prevent the bank becoming bankrupt. Perfectly normal
> > >    requirement. The software can help perform this task. However,
> > >    stated as such, you cannot code it. You have a problem testing it.
> >
> > Actually, it's pretty easy, assuming you have a good definition of the
> > bank environment and can run adequate scenarios. It is difficult, on the
> > other hand, to have a system be able to internalize all of the factors
> > regarding its environment (no system can fully describe itself, much
> > less
> > itself and its environment!)
> 
> Hmmm, ok so we are now on to my home ground, even though I'm a pilot.
> It isn't to code up these requirements, even running scenarios which
> some banks
> do (like the one I work for! ;-) ) Main reason is that it is very hard
> sometimes to
> come up with a workable set of assumptions on which to base your
> scenarios.
> Lots of banks have lossed lots of money making assumptions that are not
> valid.
> 
> > > 4) Is having assertions on utility classes and verifying them, writing
> > >    requirements for them a problem?
> > >
> > > >     In the end, you would have a *very large* number of assertions to keep
> > > >     track of and update as the system's requirements changed. Making sure
> > > >     that the changes in your requirements baseline get into the assertions
> > > >     correctly would be just as difficult as making sure they get into the
> > > >     method bodies correctly, in my view.
> > >
> > > Again, having the assertions in place, and having the verification and
> > > validation performed automatically as you run makes it more likely that
> > > you find the errors early rather than latter.
> >
> > First, "running" is not early in the development process!
> 
> Running the whole system, OK. 'Running' parts of the system? I want
> to run my code as quickly as possible, in order to validate the design.
> This
> is not contrary to 'accepted' practice. I want to develop quickly, so I
> want
> to get to test quickly.

See the problem above. To see if #3 really is going to fail, you need
the real calculation of X from the system input all the way to the
conversion. A subset of the thread may pass while the full thread
fails.

> 
> > Second, it seems that there is a schism in the DBC community. Some feel
> > that
> > testing (execution of the code) is the important part, while others
> > emphasize
> > manual verification. It makes it very difficult to discuss the
> > advantages of
> > DBC, when the response to "it's difficult to manually verify" is "it
> > helps catch
> > errors during test" and the response to "it's difficult to use
> > assertions to test"
> > is "it helps catch errors during manual verification."
> 
> Manual verification is a weak argument IMHO. However, in inspecting
> code to see what it does in order to use it, assertions are very
> useful. Execution is very important. As an aid to clear, good design,
> I have found DBC to be an excellent aid to quality code.

Mr. Meyer and others, on the other hand, say the strongest part of
assertions is manual verification. Who's right?

> 
> > >
> > > >     The chance for an error under these conditions is not lessened by DBC.
> > >
> > > But the impact of it is, as you are more likely to catch it earlier.
> > > Remember, you don't have to write the assertions. ie. Eiffel is giving
> > > you an extra tool.
> >
> > This is the other schism: Arguments about DBC are addressed in Eiffel
> > terms,
> > and arguments about Eiffel are addressed in DBC terms ("DBC does not
> > require Eiffel", for example).
> >
> 
> Agreed, it just makes it easier to produce code with DBC, and so you are
> more likely to produce code containing the assertions. DBC in C++ is
> very awkward,
> and from what I have been reading about ADA, the situation is vastly
> superior to
> C++, but there are some drawbacks for replication the inheritance of
> assertions.
> (see some previous messages)
> 
> Out of interest, do you think DBC is a bad concept, or are your
> arguments mainly
> a C++ vs Eiffel or anti the Ariane article?

I'm still not convinced there is a unified understanding of DBC. The
more I read from proponents, the more DBC seems to be used as a
buzzword like OO, with a lot of different implementation approaches. I
can hardly be against something that doesn't have an accepted
definition!

Am I against good documentation? No. Am I against well-written
interfaces? No. Do I think DBC means the same thing as either of these?
I'm not sure yet.

I do think any methodology that concentrates on code is suspect,
however. This is why it bothers me that DBC discussions quickly lead to
"here's how you code it in Eiffel" and "you catch that problem during
execution."

> 
> > >
> > > > (2) If DBC were used as an "executable SRS", the number of assertions in
> > > > the
> > > >     system could very easily grow so large as to cause deadlines to be
> > > > missed.
> > >
> > > Assertions are inherited. It is common for base classes to have lots of
> > > assertions, deferred features (virtual in C++ parlance). Inherit classes
> > > don't have many as they have inherited the assertions.
> > >
> > > >     I suppose that most Eiffel programmers would write preconditions
> > > >     in terms of public attributes (i.e. require (My_Object.Attr_1 =
> > > >     whatever)), but even simple comparisons like this could really
> > > >     add up if there are enough of them. This point is only of
> > > >     concern for developers of large real-time systems.
> > >
> > > Most assertions are of this form. Checks that a reference is not void.
> >
> > However, most requirements are not in this form. Does this mean that
> > most requirements won't be written as assertions?
> 
> I disagree. You may not have a requirement explicitly written in your
> design document. 

By definition, you should _never_ have a requirement explicitly written
in your design document, at least from the end-user perspective. How
many users say, "I want a word processor, and one of the critical
requirements is that it check for void references!"

> Let us say you have an object from an artificial horizon, and that,
> for argument's sake, it contains three values for pitch, yaw and bank.

That's a requirement, definitely.

> A method/routine that has one of these objects as a parameter has a
> requirement that it is passed such an object.

This sounds like design. As a pilot, did you ever call up Sporty's
Pilot Shop and say, "I'd like to buy an avionics suite where, when
the artificial horizon object is passed to a method, its existence
and type conformity are checked"?

> (Existence and type conformity). Not many requirements documents will
> explicitly state such a requirement, although they may state the
> requirement that the object is optional, if that were the case.
> 
> If your requirements are more complex, then break them down into
> smaller requirements: divide and conquer.

It's not a question of complexity... it's a question of motivation. For
example, assuming you capture requirements as use cases, would you
expect to see much discussion of void pointers in the average
application?

> 
> > Furthermore, I can get assertions of this form without DBC/Eiffel (e.g.
> > in
> > Ada, under any methodology). What's special about DBC if it causes you
> > to write code just to get checks you get for free in other languages?
> 
> DBC is not language specific. Eiffel as a language makes it easy to use
> DBC.

However, proponents of Eiffel say that DBC cannot be _adequately_
implemented in other languages.

> 
> How do you get inheritance of assertions in C++, Ada for free?

How can DBC be language independent if it requires specific language
features?
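For what it's worth, assertion inheritance can be approximated in
languages without direct support, at the cost of doing the wiring by
hand. Here is a minimal Python sketch (the class and routine names are
invented for illustration, not taken from any post in this thread):

```python
# A hand-rolled approximation of inherited assertions: the base class
# checks the contract in a public wrapper, and subclasses override only
# the unchecked body, so every subclass inherits the check.

class AttitudeSource:
    def reading(self):
        r = self._reading()            # subclass supplies the body
        # Postcondition inherited by every subclass:
        assert all(-180.0 <= v <= 180.0 for v in r), "angle out of range"
        return r

    def _reading(self):
        raise NotImplementedError      # a "deferred feature" in Eiffel terms

class ArtificialHorizon(AttitudeSource):
    def __init__(self, pitch, yaw, bank):
        self.pitch, self.yaw, self.bank = pitch, yaw, bank

    def _reading(self):
        return (self.pitch, self.yaw, self.bank)

print(ArtificialHorizon(1.0, 2.0, 3.0).reading())  # -> (1.0, 2.0, 3.0)
```

Eiffel does this wiring automatically, and also enforces the rules about
weakening preconditions and strengthening postconditions under
inheritance; a convention like the above has to be policed by the
programmers themselves.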

> 
> > (BTW, this is the third schism. Compare and contrast the arguments by
> > other
> > DBC/Eiffel advocates that most assertions are "high-level" abstractions,
> > making them easier to read. I'm pretty much convinced at this point that
> > DBC, like the infamous "object-oriented," can mean just about anything
> > to
> > anyone!)
> 
> I agree with the point about the high-level nature of assertions.
> Looking through code that I have written, this is the case. Classes
> with a large number of assertions are abstract base classes of
> hierarchies. This is not surprising.

An assertion at the top of a hierarchy is not necessarily what I had
in mind as "high-level." Compare and contrast:

  1. Horizontal_Bias <= Maximum_Bias (low level)

  2. The system shall operate in the flight envelope shown in Figure 1.
     (high level, difficult to write as a simple line of code, but
     the real issue at the system level).
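To make the contrast concrete, here is a hedged sketch. The constant,
the polygon, and the function names are all invented stand-ins (a real
flight envelope involves far more state than a 2-D polygon), but they
show how much machinery even a cartoon of requirement 2 needs compared
to requirement 1:

```python
# Low-level assertion (requirement 1): one comparison, easy to write
# and easy to review.
MAXIMUM_BIAS = 32767

def check_bias(horizontal_bias):
    assert horizontal_bias <= MAXIMUM_BIAS

# "High-level" requirement 2: staying inside a flight envelope. Even a
# crude 2-D stand-in (an altitude-vs-speed polygon, values invented
# here) needs a data structure and some geometry, and still omits the
# dynamics that Figure 1 would capture.
ENVELOPE = [(0, 0), (0, 400), (40000, 900), (40000, 500)]  # (alt, speed)

def inside_envelope(alt, speed):
    # Point-in-polygon by ray casting -- already far beyond a one-line
    # assertion, and still only a cartoon of the real requirement.
    inside = False
    n = len(ENVELOPE)
    for i in range(n):
        (a1, s1), (a2, s2) = ENVELOPE[i], ENVELOPE[(i + 1) % n]
        if (s1 > speed) != (s2 > speed):
            cross = (a2 - a1) * (speed - s1) / (s2 - s1) + a1
            if alt < cross:
                inside = not inside
    return inside
```

And even this says nothing about proving that Horizontal_Bias stays
below Maximum_Bias for every trajectory, which is the point being
argued: the assertion is easy to read, the system-level claim behind it
is not.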

For more information on all this, see:

  http://www.flash.net/~kennieg/ariane.html

where these arguments get hashed out as much as I know how.

> 
> ...
> 
> --
> 
> Nick





* Re: Use of DBC as "executable SRS": scaling problems
  1997-08-11  0:00       ` Ken Garlington
  1997-08-12  0:00         ` Jon S Anthony
@ 1997-08-12  0:00         ` Nick Leaton
  1997-08-12  0:00           ` Ken Garlington
  1 sibling, 1 reply; 15+ messages in thread
From: Nick Leaton @ 1997-08-12  0:00 UTC (permalink / raw)



Ken Garlington wrote:

[snip]

> > I would believe you should be writing all of them. In practice they
> > are probably written by different people, so making the assumptions
> > transparent and executable is good practice.
> 
> "Executable," unfortunately, doesn't help the reviewer, which was the
> subject of my discussion. It also doesn't help if the invalid input is
> not seen prior to operation.
> 
> I don't know exactly what you mean by "transparent," but I assume you
> mean "easy to read." However, that's not the issue. Consider the
> proposed Ariane-5 Eiffel solution. The assertion #3 ( X <= Maximum )
> is certainly easy to read. Now, prove whether or not X _will_
> exceed Maximum for the Ariane-5. That's much more complicated,
> since there may be a great deal of complexity hiding between the
> system environment inputs and the value of X when it reaches this
> assertion. Note that assertion #1 may be based on different criteria
> than #3, so #1 may pass while #3 fails. So, adding more assertions
> doesn't help here; in fact, it dilutes the review effort.

By transparent in the case of Eiffel assertions I mean that they
are clearly visible to the reader of the code. 

1) They are in set places
2) They are in the code
3) They have a standard form (logical expression)

Compare this to the situation where they are in some other
documentation.

1) They are not in set places. They may be on one project, but they
   are not in set places across all projects.

2) They may be in the code, but as comments.

3) They don't have a standard form. They are probably in English, and
   English is prone to ambiguity. Having coded assertions in any
   language at least makes sure they compile, and so have some
   standard form.

So Eiffel helps with 1 and 2; 3 is applicable to any language.
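The three points can be illustrated with a small, hypothetical fragment
(any language with executable assertions would do; the names here are
invented):

```python
# Points 2 and 3 failing: the contract lives only in an English
# comment. Nothing checks it, and "should be between" is as precise as
# it gets.
def set_pitch_documented(pitch):
    """pitch should be between -90 and 90 degrees."""
    return pitch

# Points 1-3 holding: the contract is in a set place (the start of the
# routine), in the code, and in a standard form (a boolean expression
# the interpreter must at least parse).
def set_pitch_checked(pitch):
    assert -90.0 <= pitch <= 90.0, "pitch out of range"
    return pitch
```

Whether the boolean form also makes the underlying requirement less
ambiguous is, of course, exactly what the rest of this thread disputes.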

> >
> > > > 2) Writing assertions forces you to consider if the requirements
> > > >    are well formed. If you can't write an assertion, has the requirement
> > > >    been stated in an unambiguous way? Is it realisable? etc ...
> > >
> > > However, they cannot usually detect _missing_ requirements (Ariane 5
> > > case).
> >
> > Yes. Do you have a method that does?
> 
> Actually, there are such techniques. Failure Modes and Effects Testing
> is very powerful for determining systems that are missing critical
> fault-detection and fault-tolerance requirements, for example.

Nothing stops you using this.


Coming back to the general point, DBC only adds to the software
development process. It is optional. However, I do agree with Meyer
that you ignore it at your peril. If you are worried about timing
being critical, which is the only valid point that I have read, I have
some concerns of my own. Writing safety-critical software where
something must work within a set period, and where you don't have a
large safety margin to spare, is worrying, and similarly if you have
to rely on software timings extensively. Getting faster chips strikes
me as a better idea, or waiting till they become available. If you
can't, you have a research project, not a production system.

I would be interested in knowing more about the patterns that you use
to handle exceptions when they happen.

-- 

Nick





* Re: Use of DBC as "executable SRS": scaling problems
  1997-08-11  0:00       ` Ken Garlington
@ 1997-08-12  0:00         ` Jon S Anthony
  1997-08-12  0:00         ` Nick Leaton
  1 sibling, 0 replies; 15+ messages in thread
From: Jon S Anthony @ 1997-08-12  0:00 UTC (permalink / raw)



In article <33EFD297.3663@flash.net> Ken Garlington <kennieg@flash.net> writes:

> An assertion at the top of a hierarchy is not necessarily what I had
> in mind as "high-level." Compare and contrast:
> 
>   1. Horizontal_Bias <= Maximum_Bias (low level)
> 
>   2. The system shall operate in the flight envelope shown in Figure 1.
>      (high level, difficult to write as a simple line of code, but
>      the real issue at the system level).
       ^^^^^^^^^^^^^^

Bingo.  That's kinda sorta it in a nutshell.  Except that it is not
just "difficult", it is basically _impossible_.  Why it is that some
of the "fanatics" here can't seem to understand this is pretty much
incomprehensible (and given the particular context, outright scary).

/Jon
-- 
Jon Anthony
OMI, Belmont, MA 02178, 617.484.3383 
"Nightmares - Ha!  The way my life's been going lately,
 Who'd notice?"  -- Londo Mollari





* Re: Use of DBC as "executable SRS": scaling problems
  1997-08-12  0:00         ` Nick Leaton
@ 1997-08-12  0:00           ` Ken Garlington
  0 siblings, 0 replies; 15+ messages in thread
From: Ken Garlington @ 1997-08-12  0:00 UTC (permalink / raw)



Nick Leaton wrote:
> 
> Ken Garlington wrote:
> 
> [snip]
> 
> > > I would believe you should be writing all of them. In practice they
> > > are probably written by different people, so making the assumptions
> > > transparent and executable is good practice.
> >
> > "Executable," unfortunately, doesn't help the reviewer, which was the
> > subject of my discussion. It also doesn't help if the invalid input is
> > not seen prior to operation.
> >
> > I don't know exactly what you mean by "transparent," but I assume you
> > mean "easy to read." However, that's not the issue. Consider the
> > proposed Ariane-5 Eiffel solution. The assertion #3 ( X <= Maximum )
> > is certainly easy to read. Now, prove whether or not X _will_
> > exceed Maximum for the Ariane-5. That's much more complicated,
> > since there may be a great deal of complexity hiding between the
> > system environment inputs and the value of X when it reaches this
> > assertion. Note that assertion #1 may be based on different criteria
> > than #3, so #1 may pass while #3 fails. So, adding more assertions
> > doesn't help here; in fact, it dilutes the review effort.
> 
[snip of why Eiffel assertions are easy to read - "transparent"]

How does this address the issue I raised (see quoted block above)?
Note that "transparency" is not the issue! See section 3.1.4 (in fact,
all of section 3.1) of
  http://www.flash.net/~kennieg/ariane.html

> > > > > 2) Writing assertions forces you to consider if the requirements
> > > > >    are well formed. If you can't write an assertion, has the requirement
> > > > >    been stated in an unambiguous way? Is it realisable? etc ...
> > > >
> > > > However, they cannot usually detect _missing_ requirements (Ariane 5
> > > > case).
> > >
> > > Yes. Do you have a method that does?
> >
> > Actually, there are such techniques. Failure Modes and Effects Testing
> > is very powerful for determining systems that are missing critical
> > fault-detection and fault-tolerance requirements, for example.
> 
> Nothing stops you using this.

Actually, using assertions might stop you from doing this. See section
3.1.6 of
  http://www.flash.net/~kennieg/ariane.html

> Coming back to the general point, DBC only adds to the software
> development process. It is optional. However, I do agree with Meyer
> that you ignore it at your peril. If you are worried about timing
> being critical, which is the only valid point that I have read, I
> have some concerns.

There are more adverse consequences. See section 3.2.2 of
  http://www.flash.net/~kennieg/ariane.html

> Writing safety-critical software where something must work within a
> set period, and where you don't have a large safety margin to spare,
> is worrying, and similarly if you have to rely on software timings
> extensively. Getting faster chips strikes me as a better idea, or
> waiting till they become available. If you can't, you have a research
> project, not a production system.

Unfortunately, flight systems such as the Ariane 5 have various
limitations (e.g. environmental factors like available power) that may
limit their choices of hardware. (I should add this to my paper, I
guess.)

I agree that it would be nice to be able to ignore such factors, but
(un)fortunately that's what makes embedded systems programming different
from other types of software development.

Note also that the issue is not that real-time systems are designed such
that they "rely on software timings extensively." The issue is testing
your system to show that it does _not_ inadvertently rely on process
timings (e.g., race conditions). See section 3.2.2 of
  http://www.flash.net/~kennieg/ariane.html

> I would be interested in knowing more about the patterns that you use
> to handle exceptions when they happen.

That's part of the problem. Such patterns are difficult to develop.
There are various approaches that can be used, but few have any
particularly firm proofs as to their contribution to the overall
reliability of the system. Some of this is discussed in section 3.3 of
  http://www.flash.net/~kennieg/ariane.html

> 
> --
> 
> Nick





* Re: Use of DBC as "executable SRS": scaling problems
  1997-07-31  0:00 Use of DBC as "executable SRS": scaling problems card
  1997-07-31  0:00 ` Nick Leaton
  1997-08-01  0:00 ` Don Harrison
@ 1997-08-13  0:00 ` Lee Webber
  2 siblings, 0 replies; 15+ messages in thread
From: Lee Webber @ 1997-08-13  0:00 UTC (permalink / raw)



On Thu, 31 Jul 1997 08:41:32 -0600, card@syr.lmco.com wrote:

>In my initial post, I said that I thought it would be unwise
>to try to "code" the software requirements specification for a
>complex software component as a set of assertions (boolean
>expressions) due to problems of scale.... When that
>requirement is changed, the impact of the change could be
>significant since "dual maintenance" is required- that is, the
>assertions must be checked and changed as necessary as well as
>the method bodies.

Note that the same criticism could be levelled at
double-entry bookkeeping.





end of thread, other threads:[~1997-08-13  0:00 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
1997-07-31  0:00 Use of DBC as "executable SRS": scaling problems card
1997-07-31  0:00 ` Nick Leaton
1997-08-01  0:00 ` Don Harrison
1997-08-13  0:00 ` Lee Webber
  -- strict thread matches above, loose matches on Subject: below --
1997-07-29  0:00 card
1997-07-30  0:00 ` Nick Leaton
1997-07-31  0:00   ` Ken Garlington
1997-08-11  0:00     ` Nick Leaton
1997-08-11  0:00       ` Ken Garlington
1997-08-12  0:00         ` Jon S Anthony
1997-08-12  0:00         ` Nick Leaton
1997-08-12  0:00           ` Ken Garlington
1997-07-31  0:00 ` W. Wesley Groleau x4923
1997-07-31  0:00   ` Al Christians
1997-07-31  0:00 ` Don Harrison
