comp.lang.ada
* Re: Use of DBC as "executable SRS": scaling problems
@ 1997-07-31  0:00 card
From: card @ 1997-07-31  0:00 UTC (permalink / raw)
  To: card


In my initial post, I said that I thought it would be unwise
to try to "code" the software requirements specification for a
complex software component as a set of assertions (boolean
expressions) due to problems of scale. The reason for this is
that a requirement stated in the reader's natural language
(English, French ...) might require a significant number of
boolean expressions in the software implementation. When that
requirement is changed, the impact of the change could be
significant, since "dual maintenance" is required; that is, the
assertions must be checked and changed as necessary as well as
the method bodies.  Note that it is conceivable that only one
or the other may need to change. For example, if the requirement
change resulted in a change to the valid range of an attribute,
perhaps only the precondition or postcondition would need to
be modified while the algorithm in the method body could be
left alone.
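
To make the range example concrete, here is a minimal sketch in Ada
(the names are hypothetical, and I am writing the Eiffel-style
precondition as an explicit check, since standard Ada has no assertion
construct). If the requirement's valid range changes, only the subtype
declaration changes; the algorithm in Set_Depth does not:

   package Depth_Control is
      --  Valid range taken from a (hypothetical) SRS requirement
      subtype Valid_Depth is Integer range 0 .. 1000;
      type Tank_Type is record
         Depth : Integer := 0;
      end record;
      procedure Set_Depth (Tank : in out Tank_Type; D : in Integer);
   end Depth_Control;

   package body Depth_Control is
      procedure Set_Depth (Tank : in out Tank_Type; D : in Integer) is
      begin
         --  Hand-written "precondition" (Eiffel would use 'require')
         if D not in Valid_Depth then
            raise Constraint_Error;
         end if;
         Tank.Depth := D;  --  body is untouched by a range change
      end Set_Depth;
   end Depth_Control;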

In response to my original concerns about scaling, Nick Leaton
replied:

>Divide and conquer still applies. One would hope that the component
>pieces of your system will still be small. Using assertions at this level
>is an advantage.

Divide-and-conquer (decomposition) does make the process of building
complex software more manageable, but it does not necessarily reduce
the scaling problem. A complex system is still a complex system, and
decomposition will not reduce the number of assertions you might need
to build it (it may even increase the number, since decomposing the
system will produce more inter-class interfaces).

In my discussion of this scaling issue, I wrote the following in
my original post:
>In the end, you would have a *very large* number of assertions to keep
>track of and update as the system's requirements changed. Making sure
>that the changes in your requirements baseline get into the assertions
>correctly would be just as difficult as making sure they get into the
>method bodies correctly, in my view.

And Nick Leaton replied:
>Again, having the assertions in place, and having the verification and
>validation performed automatically as you run makes it more likely that
>you find the errors early rather than later.

As part of our software process here, our software tests must verify
all of the testable SRS requirements and cover 100% of the branches
in the code (we use a code coverage tool to verify this). In Ada, the
only kind of automatically-checked "contract" (if you will) is Ada's
type-checking mechanism, which checks that an array index is not out
of bounds, that a null pointer is not dereferenced, that an illegal
type conversion is not performed, etc. If any of these "contracts" is
violated, Ada's Constraint_Error exception is raised.
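
A trivial sketch of one of these built-in "contracts" firing:

   procedure Demo is
      subtype Small is Integer range 1 .. 10;
      X : Small;
      N : Integer := 11;
   begin
      X := N;  --  raises Constraint_Error: 11 is not in 1 .. 10
   end Demo;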

Our testing procedure ensures that almost all errors of this type will
be caught. If we were developing software in Eiffel, we would still
require the same testing procedures (indeed, if you don't execute a
branch of the code, the assertions along that branch won't verify
anything!). As I mentioned earlier, Eiffel's assertions capture more
of the "behavioral"/semantic aspects of class methods since they are
not necessarily verifying that the object's attributes are legal values
(Ada range check), but rather that the object is in the proper *state*.
Having an exception raised at the point where our "state machine"
broke down would give better error localization in many cases. One of
the common cases is managing an internal data structure like a list
or tree. In Ada, a procedure to insert an element might mess up the
list pointers without causing a Constraint_Error exception (i.e. a
next pointer is accidentally set to null, but the insert procedure
doesn't dereference this pointer so no exception is raised). The error
would then not be detected until a later point in time when the list
was being traversed. In a case like this, a postcondition check would
have raised the exception in the offending routine, and I agree that
this is better.
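
Here is the list example sketched in Ada (types and names are
hypothetical, and the postcondition is written as an explicit check,
since standard Ada has no 'ensure' clause):

   package Lists is
      type Node;
      type Node_Ptr is access Node;
      type Node is record
         Value : Integer := 0;
         Next  : Node_Ptr;
      end record;
      procedure Insert (Head : in out Node_Ptr; Value : in Integer);
   end Lists;

   package body Lists is
      procedure Insert (Head : in out Node_Ptr; Value : in Integer) is
         Old_Head : constant Node_Ptr := Head;
      begin
         Head := new Node'(Value => Value, Next => Head);
         --  Hand-written "postcondition": the old list must still
         --  hang off the new head.  If a bug had set Next to null
         --  instead, we would fail *here*, in the offending routine,
         --  rather than during some later traversal.
         if Head.Next /= Old_Head then
            raise Program_Error;
         end if;
      end Insert;
   end Lists;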

Either way, though, the branch of code in the hypothetical "insert
element" method must be executed for its design error to be caught,
so DBC will not allow the error to be found prior to the software
test phase (presuming that the error has already gone undetected in
code reviews, etc).

The other aspect of the scaling problem that I mentioned in my
original post was the impact that a very large number of assertions
could have on timing (applicable only to real-time systems):
>(2) If DBC were used as an "executable SRS", the number of
>    assertions in the system could very easily grow so large as
>    to cause deadlines to be missed
     <snip>
>    I suppose that most Eiffel programmers would write preconditions
>    in terms of public attributes (i.e. require (My_Object.Attr_1 =
>    whatever)), but even simple comparisons like this could really
>    add up if there are enough of them. This point is only of
>    concern for developers of large real-time systems.

Nick Leaton replied:
>Most assertions are of this form. Checks that a reference is not void.
>As a result they are cheap to evaluate.

My point was scale: each one may be "cheap", but what if there are
a large number of them?
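
To put rough numbers on it (these figures are entirely hypothetical):
if a check costs on the order of 10 instructions, and a 50 Hz control
frame crosses 10,000 assertion-guarded calls, that is roughly 100,000
instructions of checking per frame, or 5 million per second. On the
kind of embedded CPU many real-time systems run on, that can be a
noticeable slice of the timing budget.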

The strength of the Eiffel assertion mechanism is that it serves as a
way to verify that your software "state machine" is behaving as it
should. However, because assertions are actually verifying the state
transitions of objects, they cannot be statically "optimized away"
like Ada's range checks often can. (Most Ada compilers perform an
analysis of array index operations, assignments, and type conversions
at compile time to determine whether or not a range check is
required. For this reason, suppressing run-time checking in an Ada
program often has less of an impact than many people would expect).
Such compile-time optimization is generally not applicable to
assertions because the object's state is determined by its initial
state and the sequence of methods that were called on it after
initialization. To "optimize away" a precondition on an inter-class
method call, the Eiffel compiler would have to know the client
object's initial state and every method call that has been made to
the object! Since this is impossible in all but trivial cases, it is
safe to say that if you include assertions in your class
definitions, they will cost you at run-time. If there are a lot of
them, this could be a problem in a real-time system. (NOTE: I am
presuming the use of an Eiffel *compiler* here, not an interpreter).
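
For contrast, here is the sort of check an Ada compiler typically
*can* remove statically (a minimal sketch): the loop variable ranges
over the array's own index subtype, so the index is provably in
bounds, and most compilers will emit no run-time check at all:

   procedure Clear_Table is
      subtype Index is Integer range 1 .. 10;
      Table : array (Index) of Integer;
   begin
      for I in Index loop
         Table (I) := 0;  --  I is provably in 1 .. 10
      end loop;
   end Clear_Table;

No comparable static argument is usually available for an assertion
about an object's state.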

Nick also added:
>In other systems you add such checks into the implementation, so
>you have to do them anyway.

This is a matter of programming style. In my experience, it is
generally not done. That is, special "if" statements to check an
object's state are *usually* not written at the start of the class's
methods (preconditions) or at the end of the methods (postconditions).
I'm not saying this is good, I'm just saying that in my experience
people do not write code "Eiffel-style" in Ada/FORTRAN/C etc.
Precondition/postcondition/invariant violations are instead caught
during testing, as inconsistent/invalid states usually manifest
themselves via Constraint_Errors, lost/corrupt data, etc. (Again, I
think Eiffel's assertions offer superior error localization during
testing). If the testing is rigorous, you can validate your
software state machine that way and not "pay the run-time price" for
assertions. If your testing is sparse...

By the way, is it possible to "turn off" Eiffel's assertions after
the code has been fully tested? (I don't mean making a source code
change like commenting them out; I'm referring to a compile-time
option like Ada's pragma Suppress).

Finally, Don Harrison wrote:
>I wonder whether there has been some confusion here over the use of
>"specification" in different contexts. Eiffel people use the term
>in talking about assertions. Ada people tend to associate it with
>SRSes.

Actually, "Ada people" usually use the term to refer to package
specifications or procedure/function/task specifications. Those of
us in the defense industry use it for formal SRSes, though, since
the US DoD usually requires a customer-approved SRS for any
software purchased with DoD dollars.

- Mike

---------------
Michael P. Card
Lockheed Martin Ocean, Radar and Sensor Systems Division
Syracuse, NY 13221
voice: (315)-456-3022  FAX: (315)-456-2414  e-mail:card@syr.lmco.com





* Use of DBC as "executable SRS": scaling problems
@ 1997-07-29  0:00 card
From: card @ 1997-07-29  0:00 UTC (permalink / raw)
  To: card


During the Ariane thread, some stated or implied that Eiffel's
inclusion of preconditions and postconditions in method signatures,
and of invariants in class specifications, would have made the
software more verifiable, since the software requirements would have
effectively been coded right into the class definitions.

I do not think that using DBC as an "executable SRS" (SRS == Software
Requirements Specification) would necessarily have saved the Ariane V
(I am not going to revisit the issues of testing & verification,
etc.). The reasons I think this way are not due to any inherent flaws
in the DBC concept itself, but rather are due to problems of scale:

(1) In a large and complex system, the number of preconditions and
    postconditions in a complex class hierarchy could get extremely
    large. I was a member of a team which built a real-time database
    manager for the Seawolf submarine. This software component had over
    300 requirements, and we had to verify almost all of them by test
    (there were a *very small* number that were verified by
    inspection). These tests included timing tests that were conducted
    on the actual hardware that was delivered to the Seawolf.

    In a large system like this, the verification and validation
    (V & V) process is difficult. It is hard to keep track of 300+
    requirements, especially since those requirements frequently change
    during the development of the system! (It took a team of 3+ people
    about 2.5 years to build and verify the Seawolf's database
    manager). If the requirements are expressed in code that is part of
    your method signatures, the task will not (IMHO) get any easier. In
    fact, it would likely get worse, since there would be additional
    assertions on low-level "utility" classes that are not part of the
    system's requirements. These would also have to be verified against
    another set of requirements.

    In the end, you would have a *very large* number of assertions to
    keep track of and update as the system's requirements changed.
    Making sure that the changes in your requirements baseline get into
    the assertions correctly would be just as difficult as making sure
    they get into the method bodies correctly, in my view.

    The chance for an error under these conditions is not lessened by
    DBC.

(2) If DBC were used as an "executable SRS", the number of assertions
    in the system could very easily grow so large as to cause deadlines
    to be missed. I suppose that most Eiffel programmers would write
    preconditions in terms of public attributes (i.e. require
    (My_Object.Attr_1 = whatever)), but even simple comparisons like
    this could really add up if there are enough of them. This point is
    only of concern for developers of large real-time systems.

I am not saying here that I think DBC is inappropriate for large
real-time systems. On the contrary, I think that selective application
of it might be of real benefit in some systems. What I am criticizing
is the notion that DBC can be used to basically code the system
requirements right into the mission software ("provably correct S/W
courtesy of DBC"). This might be true for some very small systems, but
on the large systems I have worked on, I think an approach like this
would have failed miserably due to the two scaling problems mentioned
above. I do not know how large the software system on Ariane V was,
but I suspect it was too large for an approach like this.





