comp.lang.ada
* Run-time checking and speed
From: Tony Leavitt @ 1995-01-10 22:20 UTC


I have a question about developing Ada code that still has all of
its normal run-time checking and is "fast."  Is there an execution
difference with respect to run-time checking in any of the following?
Or, is there "a good real-time style" for this type of stuff? 



-- CASE 1
-- This case seems to need all of the full run-time checking since
-- the variables x and y can be unlimited when accessing the array.

XY_array : array (integer range 1..100, integer range 1..100) of float ;

for x in 1..100 loop
  for y in 1..100 loop
     XY_array(x,y) := calc_something  ;
   end loop ;
end loop ;



-- CASE 2
-- In this case it seems that no runtime checking is needed when
-- accessing the array since x and y are, by definition, within
-- the array bounds (assuming memory doesn't go bad while running).

subtype arraybounds is integer range 1..100 ;

XY_array : array (arraybounds, arraybounds) of float ;

for x in arraybounds loop
  for y in arraybounds loop
     XY_array(x,y) := calc_something  ;
   end loop ;
end loop ;



-- CASE 3
-- This seems similar to CASE 2

XY_array : array (integer range 1..100, integer range 1..100) of float ;

for x in XY_array'Range(1) loop
  for y in XY_array'Range(2) loop
     XY_array(x,y) := calc_something  ;
   end loop ;
end loop ;




I've used loops since they are simple, but the same issues arise in
other contexts.  For example, I could have a function compute the array
indices and return either integer or arraybounds.  If it returns
arraybounds, it seems that the run-time check would occur when the
arraybounds value is assigned, and not when the array is accessed.
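
Something like this, I mean (calc_index and raw_input are names I just
made up):

   function calc_index (raw : integer) return arraybounds is
   begin
      return raw ;   -- range check happens here: constraint_error
                     -- is raised if raw is not in 1..100
   end calc_index ;

   x : arraybounds ;
   ...
   x := calc_index (raw_input) ;
   XY_array(x, 1) := calc_something ;  -- no further check should be needed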

TIA,

-- 

Tony Leavitt

Email: tony@gelac.lasc.lockheed.com




* Re: Run-time checking and speed
From: Roger Labbe @ 1995-01-12  1:14 UTC


Tony posts three variants of a nested loop and asks what differences
there are in run-time efficiency.

Tony, it depends on your compiler.  All three of the examples you gave
can be checked at compile time, so a good compiler will not generate
run-time checking code for any of them.  If any example were going to
confuse the compiler it would be the first one, which doesn't define a
subtype for the range 1..100.  Still, the compiler I regularly use
figures out that no checking is needed.

In general, it is difficult to predict what code a compiler will
generate unless you look at a lot of the assembly it produces.  For
example, we would all expect copying arrays by slices to be faster than
a for loop, since the compiler can optimize the former into a block-move
instruction.  However, the compiler I use often generates MUCH faster
code for the for loop.  Why?  Unless it is copying a simple type such as
integer or float, it calls a routine that copies the data bit by bit
(yes, you read that right!).
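
To be concrete, the two styles I am comparing (a minimal sketch):

   procedure Copy_Demo is
      type Vector is array (1 .. 1000) of Float;
      A : constant Vector := (others => 0.0);
      B : Vector;
   begin
      B := A;                      -- whole-array (slice-style) assignment
      for I in Vector'Range loop   -- element-by-element copy
         B (I) := A (I);
      end loop;
   end Copy_Demo;

A good compiler should make the single assignment the fast form; the
compiler I described above makes the loop faster instead.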

That's an egregious example; my point is that unless you plan never to
reuse your software or move to a different compiler (or a different
version of the same compiler), you can't really optimize at the source
code level.  Most of the preferred coding practices, such as strong
typing, copying by slices (despite my contrary example), record
assignment using aggregates, etc., usually lead to the best assembly
because they give the most information to the compiler.

I can't speak for other real-time developers, but I code for
maintainability first and speed about last.  Speed becomes a factor
only in small pieces of code, and that code is usually isolated to I/O
or heavy-duty computation.  Those pieces are most likely already
isolated (encapsulated) if you have a good design, so optimizing them is
fairly straightforward.  A standard piece of advice on optimizing is
"make it right, then make it fast."  Use all the features of Ada you
deem suitable to make it right; you can always recode slow parts later,
and you will have the original code both to document your new (probably
obscure) code and to test its correctness.

I have a feeling I might have strayed from the intent of your question.
There is obviously a lot more to say about real-time design, and things
aren't as simple as the last paragraph suggests.  An excellent strategy
is to combine rapid prototyping with an evolutionary approach to quickly
find the significant limitations of your machine/compiler/tool/
programmer mix.  You can then focus on those problems early in the
design cycle, and use the ideas in the previous paragraph for the less
than critical ones.  I promise that the question of which of the three
for-loop styles is most efficient will be down in the noise in most
aspects of your work.  In the few cases where it is significant, you can
code all three and measure which is best.  Recognize that a new version
of your compiler may invalidate that work, so don't chase unattainable
goals.

If I missed your point or you have further questions, just ask.
Obviously this is a subject that interests me.

Roger Labbe




* Re: Run-time checking and speed
From: Robert Dewar @ 1995-01-12 14:13 UTC


T.E.D. says

"As for constraint checks, try to leave them in until your product is
THOROUGHLY debugged. They are worth their weight in gold during the
debugging phase."

I would put it more strongly: leave them in for the production code.
Knuth once said that removing checks from production code is like using
life-jackets for training outings, and then leaving them at home for the
final trans-Atlantic sailing on the grounds that they add too much
weight.

Of course, in some cases the delivered code does not run fast enough
and you have to cut corners to meet processing efficiency requirements;
in that situation you may have to abandon the checks.  But do this only
when you have to, not as a matter of course.  I have seen too many Ada
environments in which the assumption is that checks will be turned off
for production code.

The penalty of run time checks can run anywhere from 5-50% depending on the
code. Enthusiastic vendors will tell you that it is never more than 10%, but
that depends very much on your code and it may be higher. But suppose that
it is 20%. It is amazing how many programs really don't care about 20%, and
it is often the case that next month's version of the processor chip will make
up the 20% anyway.

Don't get me wrong, I quite appreciate that there are situations,
particularly embedded systems with critical deadlines running on junk
old hardware, where processing deadlines are indeed very tight, and
there may be some general applications where speed is pushed (a one-day
weather forecasting program that takes two days to run is not much use
to anyone :-)

So I quite appreciate that checks must be turned off in some cases, I just
argue that this should not be the default assumption.

Another situation where it is quite appropriate to turn off run time checks
is in verified safety critical code where part of the verification ensures
that the checks are not needed. You have to be *very* sure of your 
verification technology to depend on this!





* Re: Run-time checking and speed
From: Norman H. Cohen @ 1995-01-12 15:11 UTC


In article <3ev16u$ojc@pong.lasc.lockheed.com>, tony@deepthought.Sgi.COM
(Tony Leavitt) writes: 

|> -- CASE 1
|> -- This case seems to need all of the full run-time checking since
|> -- the variables x and y can be unlimited when accessing the array.
|>
|> XY_array : array (integer range 1..100, integer range 1..100) of float ;
|>
|> for x in 1..100 loop
|>   for y in 1..100 loop
|>      XY_array(x,y) := calc_something  ;
|>    end loop ;
|> end loop ;

No, the loop bounds are given by compile-time constants, so any compiler
that leaves the run-time checks in (at least with optimization turned on)
is a junk compiler.  (By the way, on each iteration of the inner loop, x
and y are CONSTANTS, so there is no possibility of "calc_something"
changing their values by a side effect.  x and y are well defined to
range over 1 .. 100.)

|> -- CASE 2
|> -- In this case it seems that no runtime checking is needed when
|> -- accessing the array since x and y are, by definition, within
|> -- the array bounds (assuming memory doesn't go bad while running).
|>
|> subtype arraybounds is integer range 1..100 ;
|>
|> XY_array : array (arraybounds, arraybounds) of float ;
|>
|> for x in arraybounds loop
|>   for y in arraybounds loop
|>      XY_array(x,y) := calc_something  ;
|>    end loop ;
|> end loop ;

This example is precisely equivalent to the previous one.  The subtype
declaration essentially makes the identifier "arraybounds" a shorthand
for the subtype indication "integer range 1 .. 100" (and the iteration
scheme "for x in 1 .. 100" is a shorthand for "for x in integer range
1 .. 100").

|> -- CASE 3
|> -- This seems similar to CASE 2
|>
|> XY_array : array (integer range 1..100, integer range 1..100) of float ;
|>
|> for x in XY_array'Range(1) loop
|>   for y in XY_array'Range(2) loop
|>      XY_array(x,y) := calc_something  ;
|>    end loop ;
|> end loop ;

Again equivalent.  Given the declaration of XY_array, XY_array'Range(1)
is essentially a shorthand for "1 .. 100".

General rule: Trust your optimizing compiler to eliminate the checks
that are obviously unnecessary (or get a new compiler if this trust
turns out to be misplaced).  The only ones you have to worry about are
the ones that are unobviously unnecessary, and even then only if you
have observed a performance problem and determined that the run-time
check is a significant contributor to it.

Here's a more interesting example: 

   subtype Index_Subtype is Integer range 1 .. 100;
   A   : array (Index_Subtype, Index_Subtype) of Float;
   Row : Index_Subtype;
   ...
   for Column in A'Range(2) loop
      A(Row, Column) := calc_something;
   end loop;

The value of Row is guaranteed to be in range PROVIDED THAT Row HAS BEEN
ASSIGNED A VALUE.  Ada 83 rules allowed compilers to omit the index check
on Row, on the grounds that if Row was not initialized execution was
"erroneous" and any behavior was allowed to ensue.  It was controversial
whether a compiler SHOULD omit this check.  In Ada 95, the check can
still be eliminated if the compiler can determine that Row has been
assigned a value before the loop is reached.  In any event, a good
compiler will move the check out of the loop so that it is only done
once.
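
Schematically (this is the effect, not literal source), the generated
code then behaves as if one had written:

   if Row not in Index_Subtype then    -- one hoisted check
      raise Constraint_Error;
   end if;
   for Column in A'Range(2) loop
      A(Row, Column) := calc_something;   -- no per-iteration check
   end loop;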

--
Norman H. Cohen    ncohen@watson.ibm.com




* Re: Run-time checking and speed
From: Keith Arthurs @ 1995-01-12 15:54 UTC



In article <3ev16u$ojc@pong.lasc.lockheed.com>,
 on 10 Jan 1995 22:20:14 GMT,
 Tony Leavitt <tony@deepthought.Sgi.COM> writes:
>I have a question about developing Ada code that still has all of
>its normal run-time checking and is "fast."  Is there an execution
>difference with respect to run-time checking in any of the following?
>Or, is there "a good real-time style" for this type of stuff?
>
>
>-- CASE 1
>-- This case seems to need all of the full run-time checking since
>-- the variables x and y can be unlimited when accessing the array.
>
>XY_array : array (integer range 1..100, integer range 1..100) of float ;
>
>for x in 1..100 loop
>  for y in 1..100 loop
>     XY_array(x,y) := calc_something  ;
>   end loop ;
>end loop ;
>

  In the above case, the compiler may be smart enough to optimize out
the index range check, since the loop bounds are hardcoded within a
valid range.  But I assume you're more interested in what happens when
you index an array using a variable with a larger range constraint than
the array's index constraint.  In that case all compilers should perform
an index constraint check.
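
That is, something like this (get_index is a hypothetical function that
can return any integer):

  z : integer ;
  ...
  z := get_index ;
  XY_array(z, 1) := 0.0 ;  -- index check must stay: z can be anywhere
                           -- in integer'range, not just 1..100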

>
>
>-- CASE 2
>-- In this case it seems that no runtime checking is needed when
>-- accessing the array since x and y are, by definition, within
>-- the array bounds (assuming memory doesn't go bad while running).
>
>subtype arraybounds is integer range 1..100 ;
>
>XY_array : array (arraybounds, arraybounds) of float ;
>
>for x in arraybounds loop
>  for y in arraybounds loop
>     XY_array(x,y) := calc_something  ;
>   end loop ;
>end loop ;
>
>

  In this case, or in cases where you index an array using a variable
with the same constraints as the array bounds, some compilers will
optimize out the constraint check.  I have a problem with this type of
optimization, because I have seen two kinds of cases where it causes
trouble.

Problem case 1:

 x, y : boolean;  -- UNINITIALIZED!!!! The variables could be out of range!

 ...
 x := y;
 if x=TRUE then
   text_io.put_line("TRUE");
 elsif x=FALSE then
   text_io.put_line("FALSE");
 else
   text_io.put_line("x is neither TRUE nor FALSE!"); -- Shouldn't happen,
                                                     -- but does!
 end if;

In this case, there is no range check on the assignment of x from the
uninitialized value in y.  If y held a bit pattern outside FALSE..TRUE,
the third branch of the if would be executed, with no constraint_error
being raised.


Problem case 2:

  function get_element (x : index_t) return element_t is
  begin
    return (element_array (x));
  end get_element;

  In this case, element_array has the index constraints of index_t, so
no range check is performed.  If the function is called with an
uninitialized variable, anything could happen.  I've seen this core
dump!  Not a very graceful way for an Ada program to terminate, and not
very safe behaviour for safety-critical software that relies on
exception handlers to keep the program from terminating.


Keith Arthurs
Software Engineer Specialist - McDonnell Douglas Corporation

/home/karthurs => cd /usr/bin/standard/disclaimer
/usr/bin/standard/disclaimer => grep "nions" *
The opinions expressed here do not reflect those of MDC.
then proceed to dice the onions into little pieces and
/usr/bin/standard/disclaimer =>




* Re: Run-time checking and speed
From: Doug Smith @ 1995-01-13  1:49 UTC


In article <3f3deb$4us@gnat.cs.nyu.edu>, dewar@cs.nyu.edu (Robert Dewar) wrote:

> T.E.D. says
> 
> "As for constraint checks, try to leave them in until your product is
> THOROUGHLY debugged. They are worth their weight in gold during the
> debugging phase."
> 
> [snip]
> 
> [snip...]to meet processing efficiency [...] you may have to abandon the
> checks, but do this only when you have to [...snip]
> 
> The penalty of run time checks can run anywhere from 5-50% depending on the
> code. [snip]
> 
> [snip]
> 
> Another situation where it is quite appropriate to turn off run time checks
> is in verified safety critical code where part of the verification ensures
> that the checks are not needed. You have to be *very* sure of your 
> verification technology to depend on this!

OK, some personal experience that has been confirmed countless times by
fellow engineers: a small percentage of the code (say 10%) accounts for
most of the processing (say 90%).  In case after case, a dynamic
analysis tool identifies the most heavily executed code, the engineer
optimizes it, and gets almost an order-of-magnitude speed improvement.

In the old days (i.e., with early compilers), I measured executable
size differences showing that half of the code was constraint checking
(today's compilers do a much better job).  By redesigning the code to
check explicitly for error conditions and raise the correct exceptions,
I could then suppress the compiler-generated constraint checking.
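
The pattern was roughly the following sketch (the names are invented,
not the actual code):

   function Get_Element (X : Integer) return Element_T is
      pragma Suppress (Index_Check);   -- we test explicitly below
   begin
      if X not in Index_T then
         raise Constraint_Error;       -- the check we chose to keep
      end if;
      return Element_Array (X);        -- indexing now unchecked
   end Get_Element;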

Formal verification was helpful in increasing my own confidence and
that of my fellow engineers; but since this was not a safety-critical
application, there weren't many arguments once they saw the performance
results.

-- 
Doug Smith
dsmith@clark.net




* Re: Run-time checking and speed
From: Philip Brashear @ 1995-01-13 12:09 UTC



As for the relative efficiency of loop coding styles (and the clear dependence
on the particular compilation system), this is the kind of detailed information
that the Ada Compiler Evaluation System (ACES) can provide.  Check the
directory "public/aces", accessed by anonymous FTP at the "ajpo.sei.cmu.edu"
site (or its imminent replacement, "sw-eng.falls-church.va.us").

Other questions regarding ACES?  Contact me.

Phil Brashear
CTA INCORPORATED
5100 Springfield Pike, Suite 100
Dayton, OH 45431
(513) 258-0831
brashepw@adawc.wpafb.af.mil
brashear@sw-eng.falls-church.va.us
ACES support contractor for the High Order Language Control Facility,
Wright-Patterson Air Force Base




* Re: Run-time checking and speed
From: Norman H. Cohen @ 1995-01-13 15:21 UTC


In article <3f3deb$4us@gnat.cs.nyu.edu>, dewar@cs.nyu.edu (Robert Dewar)
writes: 

|> Another situation where it is quite appropriate to turn off run time checks
|> is in verified safety critical code where part of the verification ensures
|> that the checks are not needed. You have to be *very* sure of your
|> verification technology to depend on this!

In the famous article "Social Processes and Proofs of Theorems and
Programs" (Communications of the ACM, May 1979), DeMillo, Lipton, and
Perlis wrote: 

         There is a tendency, as we begin to feel that a structure is
    logically, provably right, to remove from it whatever redundancies
    we originally built in because of lack of understanding.  Taken to
    its extreme, this tendency brings on the so-called Titanic effect;
    when failure does occur, it is massive and uncontrolled.  To put it
    another way, the severity with which a system fails is directly
    proportional to the intensity of the designer's belief that it cannot
    fail.  Programs designed to be clean and tidy merely so that they can
    be verified will be particularly susceptible to the Titanic effect.
    Already we see signs of this phenomenon.  In their notes on Euclid, a
    language designed for program verification, several of the foremost
    verification adherents say, "Because we expect all Euclid programs to
    be verified, we have not made special provisions for exception
    handling.... Runtime software errors should not occur in verified
    programs."  Errors should not occur?  Shades of the ship that
    shouldn't be sunk.

The remark about "programs designed to be clean and tidy merely so that
they can be verified" is off base--such programs are more easily
understood by their writers and are therefore more reliable.  However, we
should take seriously the warning about reliance on verification
technology (which has not advanced appreciably in the 16 years since the
article was written) in place of run-time checks.

(We should be equally wary, of course, of reliance on run-time checks in
place of reasoning about the correctness of a program.  And given that
someone is going to remove checks anyway, a formal proof that it is safe
to do so is to be welcomed.  At his summits with Gorbachev, Reagan was
fond of quoting the alleged Russian proverb, "Trust, but verify." When it
comes to ensuring the safety of a program, this would be better worded,
"Verify, but don't trust.")

--
Norman H. Cohen    ncohen@watson.ibm.com




* Re: Run-time checking and speed
From: Norman H. Cohen @ 1995-01-13 15:29 UTC


In article <dsmith-1201952049540001@dsmith-ppp.clark.net>, dsmith@clark.net
(Doug Smith) writes: 

|> OK, some personal experience which has been verified countless times by
|> fellow engineers. A small percentage (10%) of the code accounts for (90%)
|> of the processing. In case after case, a dynamic analysis tool identifies
|> the code executed the most, the engineer optimizes and gets almost an order
|> of magnitude speed improvement.
|>
|> In the old days (i.e. early compilers), I measured executable size differences
|> that showed half of the code was constraint checking (Today's compilers do
|> a much better job). By redesigning the code to explicitly check for and
|> raise the correct exceptions, I could then suppress the compiler generated
|> constraint checking.

Static measurements of the proportion of instructions in an object file
devoted to run-time checks can be very misleading.  A good compiler will
move many checks out of inner loops, so that, for the reasons explained
in the first paragraph above, the proportion of run-time-check
instructions among the instructions actually executed will be much
smaller.

--
Norman H. Cohen    ncohen@watson.ibm.com




* Re: Run-time checking and speed
From: Robert Dewar @ 1995-01-20  5:11 UTC


In response to Jay's concern that a global Suppress may be suicidal
because some piece of code relies on checks for its normal flow of
control: he calls such code "pinhead" code, but that is unfair.

In particular, it does not seem wrong to rely on constraint checking to
raise Constraint_Error.  Suppose the spec of some operation requires
Constraint_Error to be raised if some condition is true; I am talking
about a user package here.  Then you may not want the checks suppressed,
yet it may well be that you use the predefined constraint checking to
implement the required check, so to speak.

An example is raising Time_Error in Calendar.  You probably do not want
this suppressed by turning checks off, yet in Ada 95 Calendar has been
carefully tweaked to invite you to get Time_Error by raising a
Constraint_Error on a normal arithmetic operation (this was not possible
in Ada 83 due to minor glitches).

In GNAT, we have introduced an implementation-defined pragma Unsuppress
that undoes a previous suppression, including one applied by a
command-line argument, precisely to deal with this kind of situation; a
module that REALLY needs its checks should contain a pragma Unsuppress.
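
For example (a sketch with invented names, using the GNAT-specific
pragma just described):

   package Lookup is
      -- Spec promise: Element raises Constraint_Error for I outside
      -- the range 1 .. 100.
      function Element (I : Integer) return Float;
   end Lookup;

   package body Lookup is
      pragma Unsuppress (Index_Check);  -- keep checks on here, even if
                                        -- suppressed on the command line
      Table : array (1 .. 100) of Float := (others => 0.0);

      function Element (I : Integer) return Float is
      begin
         return Table (I);  -- the predefined index check implements
                            -- the promised Constraint_Error
      end Element;
   end Lookup;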





* Re: Run-time checking and speed
From: Michael Feldman @ 1995-01-22 18:43 UTC


In article <EACHUS.95Jan17151835@spectre.mitre.org>,
Robert I. Eachus <eachus@spectre.mitre.org> wrote:

> > Yes, compilers sometimes have a command-line switch to suppress _all_
> > checks, _everywhere_. IMHO, using this is downright foolish.
>
>   Wow!  I never thought I would be saying something like this, but:
>
>   No, Mike, the switch is very useful during development.  It allows
>you to quickly determine which compilation units may deserve further
>study when doing this sort of optimization.

[snip]

>     So keep the switch, just require that it not be used in delivered
>code.  ;-)

Good point, Bob; I hadn't thought of it.  It's actually the opposite
of what one usually sees advocated: checks _on_ during development, then
_off_ for production.  Uhhh, no.

It's interesting to contemplate just how little is to be saved by 
suppressing checks in a well-designed program that uses the (sub-)type 
system to advantage. The myth has always been that Ada does all these 
"extra" checks, but of course the number of checks is inversely proportional 
in some sense to the intelligent use of the type system. 

That being the case, because the compiler has (we hope) limited the
checks to those that are really needed, the gain from suppressing those
checks should be at the margin.

Mike Feldman




* Re: Run-time checking and speed
From: Mats Weber @ 1995-01-23 16:43 UTC


In article <3fngl9$4iv@gnat.cs.nyu.edu>, dewar@cs.nyu.edu (Robert Dewar) wrote:

> In GNAT, we have introduced an implementation-defined pragma Unsuppress
> that undoes a previous suppression, including one applied by a
> command-line argument, precisely to deal with this kind of situation; a
> module that REALLY needs its checks should contain a pragma Unsuppress.

Very interesting.  Can Suppress and Unsuppress be nested? ;-)

Mats




* Re: Run-time checking and speed
From: Robert Dewar @ 1995-01-23 23:38 UTC


Mike, you miss the point that in some environments it is REQUIRED to turn
off runtime checking.

Why?  Because runtime checking can create code that cannot be executed, and
in some verification environments coverage testing is required, so you cannot
have code and logic paths that cannot be executed.





* Re: Run-time checking and speed
From: Robert Dewar @ 1995-01-24 19:25 UTC


Yes, Suppress and Unsuppress can be fully nested; for example, you can
suppress all checks of some kind with Suppress and then selectively
unsuppress some of them.
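
A sketch of the nesting (again with the GNAT-specific Unsuppress):

   procedure Demo (N : Integer) is
      pragma Suppress (Range_Check);      -- off throughout Demo
      subtype Small is Integer range 1 .. 10;
      X : Small;
   begin
      X := N;        -- NOT checked: the caller must guarantee N in 1..10
      declare
         pragma Unsuppress (Range_Check); -- selectively back on
         Y : Small;
      begin
         Y := N;     -- checked again: Constraint_Error if N is out of range
      end;
   end Demo;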





* Re: Run-time checking and speed
From: Kent Mitchell @ 1995-01-26 16:14 UTC


Robert Dewar (dewar@cs.nyu.edu) wrote:
: Mike, you miss the point that in some environments it is REQUIRED to turn
: off runtime checking.

: Why?  Because runtime checking can create code that cannot be executed, and
: in some verification environments coverage testing is required, so you cannot
: have code and logic paths that cannot be executed.

Can you explain how runtime checking can create code that cannot be
executed?  It would seem that if the checking code is in there, it *is*
going to do the check.  I've worked on a lot of "safety critical"
software projects and I've never seen the checks turned off for the
reason you state (though I've seen plenty of other reasons).

--
Kent Mitchell                   | One possible reason that things aren't
Technical Consultant            | going according to plan is .....
Rational Software Corporation   | that there never *was* a plan!




* Re: Run-time checking and speed
From: Robert Dewar @ 1995-01-28  6:03 UTC


Kent Mitchell asks why one might need to turn off checks in certified
code.  Sure, the checks will be executed, but presumably you are doing
the checks so that if a check fails, you then do something.

It is the something that is a problem.

Suppose you do constraint checking in some routine, just to catch
incorrect input from somewhere else in the program, and you have a
handler that defends against the bad input.

How will you do coverage testing of the handler if, in fact, as one
might hope, there aren't any bad calls?

Compiler-generated checks are not the only issue here.  Suppose you write:

   if x > 10 and y > 10 then
      -- Don't know if this can ever happen, but just in case ...
      y := x - 10; -- readjust to acceptable range
   end if;

such code might be appropriate defensive programming in some
situations, but it might cause trouble if certification requires
coverage testing.  There are three possibilities in a case like this:

  1. You can find a case where the condition is triggered.  Fine,
     trigger it in the coverage test.

  2. You can prove it never happens.  Fine, remove the check.

  3. All your attempts to find a triggering case fail, but you cannot
     complete an actual proof that it never happens.

Case 3 is the tough one, and it shows why coverage testing can
sometimes be a double-edged sword.

And I have definitely seen situations in which checking was disabled
precisely because code conditioned on the checks is impossible or
difficult to coverage-test.

I am not particularly defending this state of affairs, or the absolute
requirement for coverage testing; I am just pointing out the situation :-)





* Re: Run-time checking and speed
From: Robert Dewar @ 1995-01-29 13:00 UTC


>dewar@cs.nyu.edu (Robert Dewar) wrote:
>>
>> Mike, you miss the point that in some environments it is REQUIRED to turn
>> off runtime checking.
>>
>> Why?  Because runtime checking can create code that cannot be executed, and
>> in some verification environments coverage testing is required, so you cannot
>> have code and logic paths that cannot be executed.
>>
>
>Whenever we had such requirements, they were for the SOURCE code, not
>for the EXECUTABLE code! Are you telling me you actually dissasembled
>your executables and checked to make sure there was no dead code?
>Unless your compiler was REAL good at optimization, your executables
>were probably FULL of dead code by this definition.

Absolutely, I am talking about coverage testing on the generated object
code, which ensures that every reachable instruction has been executed
by at least one test program and, in a stronger form, that every logic
branch has been executed both ways, all branches of a case have been
taken, etc.

This seems normal to me in a safety critical environment (people aren't very
trusting of their compilers in such environments, or of anything else, and
that's the way you would hope it would be).

Dead code is a problem.  You can take one of two approaches (I have seen
both done).  First, you can simply require that there be no dead code
(code with no branch path to it should be trivially eliminated even by a
pretty weak optimizer, so I don't quite know what T.E.D. is referring to
when he says that executables would be full of dead code).  This is the
approach I have most often seen used.  The second approach concentrates
on making sure that all logic paths are tested, and then you don't care
about dead code; but I am afraid that in Ada you DO have to consider
failing a check and signalling an exception as a logic path, and making
sure that all logic paths are taken means making sure all exception
handlers are executed.


Robert





* Re: Run-time checking and speed
From: Garlington KE @ 1995-01-30 19:21 UTC


: >Whenever we had such requirements, they were for the SOURCE code, not
: >for the EXECUTABLE code! Are you telling me you actually disassembled
: >your executables and checked to make sure there was no dead code?
: >Unless your compiler was REAL good at optimization, your executables
: >were probably FULL of dead code by this definition.

Two other points about "disassembling the executable":

1. There are now capabilities described in the Ada 95 Safety and
Security annex to assist in checking the object code for dead code.

2. Apparently, our compiler is real good at optimization, because dead
code is pretty rare.  I never thought of our compiler as being THAT
good, but maybe my expectations are too high :)

: Dead code is a problem.  You can take one of two approaches (I have seen
: both done).  First, you can simply require that there be no dead code
: (code with no branch path to it should be trivially eliminated even by a
: pretty weak optimizer, so I don't quite know what T.E.D. is referring to
: when he says that executables would be full of dead code).  This is the
: approach I have most often seen used.  The second approach concentrates
: on making sure that all logic paths are tested, and then you don't care
: about dead code; but I am afraid that in Ada you DO have to consider
: failing a check and signalling an exception as a logic path, and making
: sure that all logic paths are taken means making sure all exception
: handlers are executed.

Actually, we take a third approach: we do care about dead code, but we
allow it in a few cases (when our analysis of the reason for the dead
code shows that it isn't due to some error in the algorithm, etc.).

--------------------------------------------------------------------
Ken Garlington                  GarlingtonKE@lfwc.lockheed.com
F-22 Computer Resources         Lockheed Fort Worth Co.

If LFWC or the F-22 program has any opinions, they aren't telling me.



