comp.lang.ada
* Need help with PowerPC/Ada and realtime tasking
@ 1996-05-17  0:00 Dave Struble
  1996-05-18  0:00 ` JP Thornley
  0 siblings, 1 reply; 39+ messages in thread
From: Dave Struble @ 1996-05-17  0:00 UTC (permalink / raw)



I am an experienced real time system designer who is just learning
how to use Ada.  I am also just learning how to use the Power PC
(my target system).  I am seeking help to analyze some alternative 
ways to implement a hard real time application in Ada on the Power 
PC.  Actually, on a multiprocessor configuration.  
The intent is to design the software in such a way that dependence 
on the number of processors is minimized.

This is very "hard" real time -- a life-critical application with
interrupts occurring at a frequency roughly 1/100 the speed of
the memory during peak situations (although most of the
time they occur somewhat less frequently).

I seek two kinds of information:
I. - generic information on implementing hard real time applications
  in Ada - applications where the Ada tasking model cannot be used
  because:
  - the implementation of tasking is too slow
  - only a skeleton run time is in memory, and it does not support
     all Ada capabilities

II.- specific information pertaining to Ada on the Power PC.
Specifically, the Motorola 603e implementation.

The kinds of things I seek are:
- a textbook on real time applications in Ada
- a white paper or technical paper 
- an on-line document of some kind
- a series of hints or whatever from an expert

The kinds of information/questions I have are:

1) What sorts of approaches do people use for tasking when
   the Ada tasking model is too slow?  How machine independent
   can these be?
2) What approaches can be used for sharing memory between
  processes that may or may not be located in the same processor
  (but the processors do have some shared memory and some
  local memory)
3) What techniques are commonly used to synchronize?
4) What kinds of features do off-the-shelf run time systems
  offer to support these kinds of applications and what are
  the advantages and drawbacks of different systems/approaches?
5) What other questions should I be asking and what are their
  answers?
6) What pitfalls are likely?

My objective is threefold:
- to help design the software
- to select a run time system from among several candidates
- to help design the interrupt system (hardware and software)

Thanks for any help.
---------------------------------------------------------------------
-  Dave Struble                         |                           -
-  Texas Instruments Incorporated       |  "Oh God, tell me who to  -
-  Defense Systems & Electronics Group  |   smite and they shall    -
-  Dallas, TX                           |   be smooten!"            -
-  e-mail: d-struble@ti.com             |                           -
-  Voice:  (214) 575-5346               |        - Homer Simpson    -
-  FAX:    (214) 575-6200               |                           -
---------------------------------------------------------------------




^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-17  0:00 Need help with PowerPC/Ada and realtime tasking Dave Struble
@ 1996-05-18  0:00 ` JP Thornley
  1996-05-20  0:00   ` Robert I. Eachus
                     ` (5 more replies)
  0 siblings, 6 replies; 39+ messages in thread
From: JP Thornley @ 1996-05-18  0:00 UTC (permalink / raw)



In article: <d-struble.107.001A95C3@ti.com>  d-struble@ti.com (Dave 
Struble) writes:
> 
> I am an experienced real time system designer who is just learning
> how to use Ada.  I am also just learning how to use the Power PC
> (my target system).
	[snip]
> The intent is to design the software in such a way that dependence 
> on the number of processors is minimized.
> 
> This is very "hard" real time -- life critical application and 
> interrupts occurring at a frequency roughly 1/100 the speed of
> the memory during peak situations (although most of the
> time it is somewhat less frequently).
> 

My first response is that safety-critical software does not go well
with interrupts and the use of tasking.  The main requirement of 
safety-critical code is predictability, which is made impossible if
you are coping with unpredictable interrupts and with hard-to-analyse
tasking synchronisations.

> I seek two kinds of information:
> I. - generic information on implementing hard real time applications
>   in Ada - applications where the Ada tasking model cannot be used
>   because: [snip]

I wouldn't expect to see anything other than a cyclic executive in 
safety-critical software.
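
[For readers new to the term: a cyclic executive is a fixed frame
schedule driven by a single timer.  A minimal sketch in Ada 95 terms --
the frame procedures, the 25 ms minor cycle and the 4-frame major cycle
are all invented placeholders, not anything from a real system:

```ada
with Ada.Real_Time; use Ada.Real_Time;

procedure Cyclic_Executive is
   --  Placeholder frame procedures; the real application work goes here.
   procedure Read_Sensors  is begin null; end Read_Sensors;
   procedure Control_Law   is begin null; end Control_Law;
   procedure Built_In_Test is begin null; end Built_In_Test;

   Minor_Cycle : constant Time_Span := Milliseconds (25);  -- assumed rate
   Next_Start  : Time    := Clock;
   Frame       : Natural := 0;
begin
   loop
      Read_Sensors;                  --  runs in every minor cycle
      case Frame is                  --  slower jobs share the major cycle
         when 0 | 2  => Control_Law;
         when others => Built_In_Test;
      end case;
      Frame      := (Frame + 1) mod 4;
      Next_Start := Next_Start + Minor_Cycle;
      delay until Next_Start;        --  the one timing point in the design
   end loop;
end Cyclic_Executive;
```

All timing behaviour is concentrated in the single "delay until" at the
bottom of the loop, which is what makes the schedule analysable.]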

> The kinds of things I seek are:
> - a textbook on real time applications in Ada

Have a look at Concurrency in Ada - Alan Burns, Andy Wellings,
Cambridge University Press 1995, ISBN 0 521 41471 7, hardback

> - a white paper or technical paper 
> - an on-line document of some kind
> - a series of hints or whatever from an expert

Both of the authors above are at the University of York (UK) so
you might find something on the University web site - I don't have
a web address, but try an address based on york.ac.uk

> 
> The kinds of information/questions I have are:
> 
> 1) What sorts of approaches do people use for tasking when
>    the Ada tasking model is too slow?  How machine independent
>    can these be?

In safety-critical systems, predictability of operation comes first,
second and third - factors such as speed and platform independence
come a long way behind - in fact I don't see any way that you can
qualify the system without defining the hardware it will be running 
on as software cannot be qualified on its own.

> 2) What approaches can be used for sharing memory between
>   processes that may or may not be located in the same processor
>   (but the processors do have some shared memory and some
>   local memory)

Ada provides mechanisms for data structures to be mapped to specific
memory addresses.
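
[A sketch of what such a mapping looks like in Ada 95; the record layout
and the 16#...# address are invented for illustration -- the real value
comes from the board's memory map.  (Ada 83 offered the equivalent
"for X use at" address clause.)

```ada
with System.Storage_Elements;

package Shared_Region is
   type Status_Word is record
      Ready : Boolean;
      Count : Integer;
   end record;

   Shared_Status : Status_Word;
   --  Hypothetical dual-port RAM address; substitute the real one.
   for Shared_Status'Address use
     System.Storage_Elements.To_Address (16#8000_0000#);
   pragma Volatile (Shared_Status);   --  every access goes to memory
end Shared_Region;
```
]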

> 3) What techniques are commonly used to synchronize?
> 4) What kinds of features do off-the-shelf run time systems
>   offer to support these kinds of applications and what are
>   the advantages and drawbacks of different systems/approaches?

Look at the CSMART run-time from Thompson (used for some of the 
safety-critical code on the Boeing 777 - does not include any 
tasking) and ask about the future availability of a run-time 
with restricted tasking features.

Otherwise you will have to produce your own run-time as none of the 
others are (AFAIK) suitable for safety-critical systems.

> 5) What other questions should I be asking and what are their
>   answers?
> 6) What pitfalls are likely?

It is not clear what (if any) authority will be responsible for 
certifying your system, but you should look for some relevant 
software development standards for safety-critical code.  In the
civil field the usual one is DO178B (I don't have a reference to 
hand but can get it if you want).

Hope this helps,

Phil Thornley

-- 
------------------------------------------------------------------------
| JP Thornley    EMail jpt@diphi.demon.co.uk                           |
------------------------------------------------------------------------





* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-18  0:00 ` JP Thornley
@ 1996-05-20  0:00   ` Robert I. Eachus
  1996-05-21  0:00     ` Michael Levasseur
  1996-05-21  0:00   ` Richard Riehle
                     ` (4 subsequent siblings)
  5 siblings, 1 reply; 39+ messages in thread
From: Robert I. Eachus @ 1996-05-20  0:00 UTC (permalink / raw)



In article <1026696wnr@diphi.demon.co.uk> JP Thornley <jpt@diphi.demon.co.uk> writes:

  > My first response is that safety-critical software does not go well
  > with interrupts and the use of tasking.  The main requirement of 
  > safety-critical code is predictability, which is made impossible if
  > you are coping with unpredictable interrupts and with hard-to-analyse
  > tasking synchronisations.

  Your second sentence is correct, but the first doesn't follow...  If
you want to use tasking in a safety-critical system look at the work
on hard deadline scheduling by Liu and Goodenough at the SEI.  It
works, it can be a pain to work through unless you have good tools,
but it works reliably.
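
[The core of the rate-monotonic schedulability test is the Liu and
Layland utilization bound: N periodic tasks are guaranteed schedulable
under rate-monotonic priorities if total CPU utilization does not
exceed N*(2**(1/N) - 1).  A small Ada function for the bound -- a
sketch, not code from any of the cited tools:

```ada
with Ada.Numerics.Elementary_Functions;

function RM_Bound (N : Positive) return Float is
   use Ada.Numerics.Elementary_Functions;   --  for "**" with Float exponent
begin
   --  Liu & Layland (1973) bound; tends to about 0.693 as N grows.
   return Float (N) * (2.0 ** (1.0 / Float (N)) - 1.0);
end RM_Bound;
```

For three tasks the bound is about 0.78, so a task set using under 78%
of the CPU passes this sufficient test; sets above the bound need the
exact response-time analysis.]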

   If there are only a few tasks, and the timing is not too tight,  a
cyclic executive is a reasonable alternative.


--

					Robert I. Eachus

with Standard_Disclaimer;
use  Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...





* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-20  0:00   ` Robert I. Eachus
@ 1996-05-21  0:00     ` Michael Levasseur
  0 siblings, 0 replies; 39+ messages in thread
From: Michael Levasseur @ 1996-05-21  0:00 UTC (permalink / raw)



A few years back I worked on the Sea Wolf program, which had
very hard real-time requirements and required numerous interrupts
from different devices.

To make the system guarantee processing of interrupts within a
certain amount of time, we had to do Rate Monotonic Analysis to
guarantee the maximum delay in scheduling of interrupt servicing
and the maximum processing times.

The tasking must be done with great care.  You must make sure
that what is performed in a task is only what needs to be, and
you must also understand how the run-time system handles tasks.
You also need to have the minimum number of tasks that are truly
needed.


Just my two cents worth.....






* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-18  0:00 ` JP Thornley
  1996-05-20  0:00   ` Robert I. Eachus
@ 1996-05-21  0:00   ` Richard Riehle
  1996-05-25  0:00     ` JP Thornley
  1996-05-25  0:00     ` Need help with PowerPC/Ada and realtime tasking JP Thornley
  1996-05-28  0:00   ` Robert I. Eachus
                     ` (3 subsequent siblings)
  5 siblings, 2 replies; 39+ messages in thread
From: Richard Riehle @ 1996-05-21  0:00 UTC (permalink / raw)
  To: JP Thornley


On Sat, 18 May 1996, JP Thornley wrote:

> > This is very "hard" real time -- life critical application and
> > interrupts occurring at a frequency roughly 1/100 the speed of
> > the memory during peak situations (although most of the
> > time it is somewhat less frequently).
> >
>
> My first response is that safety-critical software does not go well
> with interrupts and the use of tasking.  The main requirement of
> safety-critical code is predictability, which is made impossible if
> you are coping with unpredictable interrupts and with hard-to-analyse
> tasking synchronisations.

  I would like to offer a slightly different view of this.

  The main requirement of safety-critical code is that it be "safe."
  The suggestion that a cyclic executive is safer than tasking is not
  necessarily true.  While predictability is essential, the important
  element of predictability is that the system is schedulable such
  that it can service all of its required tasks.

  A cyclic executive does not guarantee schedulability.  It does not even
  guarantee predictability.  In fact, a cyclic executive can guarantee
  that one actually fails to trap events as they occur.  An
  interrupt-aware design can detect an interrupt and choose to ignore it,
  store it for later processing, etc.

  Using the Ada 95 tasking model, along with protected types, along with
  a healthy dose of rate-monotonic analysis, one can design an interrupt
  driven application that is predictable.
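
  [In Ada 95 the interrupt side of such a design is a protected
  procedure attached to the interrupt.  A sketch -- the interrupt ID
  and the "register read" are placeholders; real interrupt names live
  in the implementation-defined package Ada.Interrupts.Names:

```ada
with Ada.Interrupts;

package Interrupt_Demo is
   --  Placeholder ID; a real system names its interrupt via
   --  the implementation-defined Ada.Interrupts.Names.
   Device_Id : constant Ada.Interrupts.Interrupt_ID :=
     Ada.Interrupts.Interrupt_ID'First;

   protected Device_Handler is
      entry Wait (Reading : out Integer);   --  a task blocks here
      procedure Handle;                     --  runs on the interrupt
      pragma Attach_Handler (Handle, Device_Id);
      pragma Interrupt_Priority;
   private
      Latest  : Integer := 0;
      Pending : Boolean := False;
   end Device_Handler;
end Interrupt_Demo;

package body Interrupt_Demo is
   protected body Device_Handler is
      procedure Handle is
      begin
         Latest  := Latest + 1;   --  stand-in for reading a device register
         Pending := True;
      end Handle;

      entry Wait (Reading : out Integer) when Pending is
      begin
         Reading := Latest;
         Pending := False;
      end Wait;
   end Device_Handler;
end Interrupt_Demo;
```

  The handler does the minimum at interrupt priority and releases an
  ordinary task through the entry barrier, which is the structure the
  rate-monotonic analysis assumes.]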

  While it is true that the Ada 83 tasking model was somewhat frail
  in its range of options for effective interrupt-driven applications,
  especially in being able to guarantee schedulability, Ada 95 is a
  more robust model.

  The underlying problem, as usual, is designing an architecture that
  guarantees predictable schedulability.

  An additional problem with the cyclic executive model is its lack of
  portability.  Since it is dependent on the timings of the platform to
  which it is targeted, it can fail to meet its goals when ported to a
  new platform.  With tasking, one is more dependent on the hardware-
  independent logic represented by the interaction of the tasks.  Again,
  the Ada 95 tasking model makes this fairly clean.

  Please do not think I am underestimating the difficulty of designing
  an architecture based on interrupt-driven protocol.  I am simply
  suggesting that Ada tasks should not be ruled out based on experience
  with the earlier Ada 83 design.  Also, I want to recommend a little
  caution in being too quick to believe that cyclic executives are any
  easier when one must depend on trapping interrupts in a timely
  fashion.

> I wouldn't expect to see anything other than a cyclic executive in
> safety-critical software.

  My note, above reflects my view on this.

> Have a look at Concurrency in Ada - Alan Burns, Andy Wellings,
> Cambridge University Press 1995, ISBN 0 521 41471 7, hardback

  I agree.  Except for a few minor syntactic errors, this is an
  excellent book.  Great exposition of protected types, and their
  role in object technology.

> > The kinds of information/questions I have are:
> >
> > 1) What sorts of approaches do people use for tasking when
> >    the Ada tasking model is too slow?  How machine independent
> >    can these be?

  Tasking can be designed to be machine independent. As far as too
  slow, that is an issue which varies from compiler to compiler. It
  is also an issue that often originates in poorly designed tasking
  architectures, misunderstanding of the rendezvous, and overall
  incorrect design.  For example, in an attempt to reduce the effect
  of tasking on a design, the software engineer sometimes uses too
  few tasks rather than too many.  In this case, each task is given
  too much work, and the system fails to meet its design goals.

  Ideally, the tasking environment is designed so each task spends most
  of its life waiting for something to do.  That way, it is always
  ready to do what it was assigned to do.  When I hear people say that
  tasking is too slow, and blame it on the compiler, or the RTE, I
  am inclined to ask for a closer look at the design.

  Don't underestimate the power of tasking.

> In safety-critical systems, predictability of operation comes first,
> second and third - factors such as speed and platform independence
> come a long way behind - in fact I don't see any way that you can
> qualify the system without defining the hardware it will be running
> on as software cannot be qualified on its own.

  Actually, this is one of the benefits of using tasking to define
  the architecture.  Remember that I speak here of Ada 95 tasking.

> > 2) What approaches can be used for sharing memory between
> >   processes that may or may not be located in the same processor
> >   (but the processors do have some shared memory and some
> >   local memory)

  An excellent example of this is the Silicon Graphics workstation. The
  way SGI has implemented tasking for sharing memory, processes, threads,
  etc. establishes that tasking is certainly useful for environments where
  performance and predictability are important.

  Richard Riehle






* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-21  0:00   ` Richard Riehle
  1996-05-25  0:00     ` JP Thornley
@ 1996-05-25  0:00     ` JP Thornley
  1996-05-27  0:00       ` Robert Dewar
  1 sibling, 1 reply; 39+ messages in thread
From: JP Thornley @ 1996-05-25  0:00 UTC (permalink / raw)



Richard Riehle <rriehle@nunic.nu.edu> writes, in a follow-up on 
safety-critical software using interrupts and tasking:-

>   The main requirement of safety-critical code is that it be "safe."

My view is that code can never be judged as safe or unsafe - only 
correct or incorrect.  However my usage of the words "safe" - and 
"safety-critical" carries a lot of additional baggage, and it is 
possible that we are differing over the meaning of these words rather 
than anything fundamental.

So here are my meanings (this could get quite lengthy and it's rather 
off-topic for cla, so bail out now if not really interested).

Software *on its own* is incapable of causing harm.  For this to occur, 
it must be part of a larger system that translates the outputs of the 
software into actions in the real world - eg moving actuators or 
displaying information.  So safety is an attribute of a system.

In assessing the safety of a system, the process starts with hazard 
identification.  A hazard is an event that has a reasonable chance of 
resulting in a serious outcome (eg death or serious injury to a person, 
major financial loss or widespread environmental damage).
For example - a traffic light controlled road junction is a system; a 
hazard (possibly the only one) could be 'collision between vehicles 
using the junction'.
[Note - other people use 'hazard' with a different meaning; here I'm 
giving the meaning I use and I'm *not* arguing that it's the only 
correct meaning.]

Hazard analysis then identifies the mechanisms that could give rise to 
the hazard.  For example:-
1. 'vehicle crosses junction when lights are on red' or 
2. 'lights indicate green in conflicting directions'

The first of these could be further analysed as:-
1a. 'driver ignores red light'
1b. 'weather conditions make light difficult to see'
1c. 'failure of the vehicle's braking mechanism'
etc.

This process continues until specific failures of individual components 
of the system have been identified.

[Time for more caveats - system safety isn't really my area, also this 
is only one of a number of different ways of doing hazard analysis - 
it's still very much a developing technology (see "Safeware" by Nancy 
Leveson)].

Based upon this analysis each component of the system can be given a 
required integrity rating.  In many cases, failure of a single component 
does not lead to the hazard unless there is an independent failure of 
one or more other components - so the required integrity level of each 
component can be reduced.  A _safety-critical_ rating is given to any 
component where a failure can lead to the hazard without the need for 
any independent failure occurring.

Clearly any safety-critical component must have a very low failure rate 
as the overall failure rate for the system cannot be less than the sum 
of the failure rates for the safety-critical components.

Following this process, and the prediction of failure rates for the 
components, the system can be judged as _safe_ or unsafe on a calculated 
probability of the hazard occurring.  It is often measured as the rate 
of the hazard occurring over a defined period of operation - typical 
figures might be 10^-6 to 10^-9 per hour depending on the perceived 
severity of the hazard, rates of exposure, etc.

So why do I say that software cannot be considered safe?

There are no meaningful failure modes for a software component, since a 
software failure can rarely be contained to only part of that component 
- it either works without failure or fails completely.  The effects of a 
software failure are assumed to be whatever are the worst possible in 
the situation that is currently under analysis.

Given that we cannot measure software to the rates quoted above, any 
software component rated as safety-critical has to be given a failure 
rate of zero in the system safety assessment.  (This places quite severe 
requirements on the software development team and their process ;-).

So safety is measured by (usually) small but definitely non-zero 
numbers; software is either correct or not, with no numeric scale.

Sorry to take so long to get there, but I thought it worthwhile trying 
to get my meanings as clear as possible.

Phil Thornley

-- 
------------------------------------------------------------------------
| JP Thornley    EMail jpt@diphi.demon.co.uk                           |
------------------------------------------------------------------------






* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-21  0:00   ` Richard Riehle
@ 1996-05-25  0:00     ` JP Thornley
  1996-05-27  0:00       ` Darren C Davenport
  1996-05-28  0:00       ` Tasking in safety-critical software (!) (was Re: Need help with PowerPC/Ada and realtime tasking) Kevin F. Quinn
  1996-05-25  0:00     ` Need help with PowerPC/Ada and realtime tasking JP Thornley
  1 sibling, 2 replies; 39+ messages in thread
From: JP Thornley @ 1996-05-25  0:00 UTC (permalink / raw)



Richard Riehle <rriehle@nunic.nu.edu> writes:
> 
> On Sat, 18 May 1996, JP Thornley wrote:

  [in response to a question about a "life critical application" which
   has both interrupts and tasking]

> > My first response is that safety-critical software does not go well
> > with interrupts and the use of tasking.  The main requirement of
> > safety-critical code is predictability, which is made impossible if
> you are coping with unpredictable interrupts and with
> hard-to-analyse tasking synchronisations.
> 
>   I would like to offer a slightly different view of this.
> 
    [well argued case for tasking mostly snipped]

Perhaps I am guilty of using the predictability argument to stand in for 
all the reasons for not using tasking in safety-critical code.  So I'll 
describe some more here.

Several studies into safety-critical subsets have all rejected tasking:-
	Safe Ada
	SPARK Ada
	High Integrity Ada Study (YSE/BAe)
	CSMART Ada
so there is going to be a major credibility problem convincing a 
qualification authority to go along with tasking.

Allow also for the relevant personnel from that authority to have an 
unknown level of mathematical and computer system literacy.  
(Furthermore, arguments which talk about high priority tasks being 
blocked by low priority tasks can be expected to bring on a severe fit 
of the vapours.)

Tasks in the application require tasking run-time support.  This will 
therefore need to be qualified to safety-critical standards (ie 
reasonable expectation of zero failures, see other post in this thread).
Since I've never done it, I don't know what it would take in terms of 
effort, but it can't be anything other than a very major undertaking.

I would guess that most of the tasking part of the run-time will be 
written in Ada - so will be required to either conform to the 
safety-critical subset in use or be re-written to that sub-set.  Common 
restrictions include no access types and no heap usage - is this likely 
to be a problem?  [One problem of working with small subsets is that you 
end up knowing nothing about the rest of the language, it's about six 
years since I last saw an Ada task anywhere other than a text-book or 
journal.]

>   A cyclic executive does not guarantee schedulability.  It does
>   not even
>   guarantee predictability.  In fact, a cyclic executive can guarantee
>   that one actually fails to trap events as they occur.

But safety-critical systems I'm talking about have *no* interrupts (see 
my original response - perhaps not sufficiently emphasised) so this 
argument does not apply.  (Actually there is one - the timer that drives 
the minor cycle - and this has to be explicitly justified in the safety 
case.)

>   An additional problem with the cyclic executive model is its
>   lack of portability.  Since it is dependent on the timings of
>   the platform to
>   which it is targeted, it can fail to meet its goals when ported to a
>   new platform.

Agreed - any hardware or software change that affects the timing 
analysis means that you have to do it all again, and you may have to 
redesign the schedule.  (But if it comes to a trade-off between safety 
and portability there's not much doubt as to which way I'd go.)

There is a related issue that doesn't come up very often when discussing 
scheduling strategies, which is the accuracy of the worst-case execution 
times used in the analysis.  Deriving these figures requires a major 
effort, with substantial error bounds on the resulting timings.  I 
believe that the figures that I currently use are typically in the range 
10%-30% pessimistic and I wouldn't be happy to use figures with a lower 
margin of error unless I can believe that their accuracy is improved.

I can't get excited about more elaborate scheduling strategies to squeeze 
another 5% out of the processor with safety margins like this (I'd 
sooner put it into more accurate timing figures).

>   ....  Remember that I speak here of Ada 95 tasking.

I was beginning to wonder whether I was the only reader of cla still 
using Ada 83 until there were some recent posts from others in the same 
situation in another thread.  To put my situation more clearly, there is 
one safety-critical system going into system design later this year, 
first trials to be run in 1998 and delivery to the customer in 2000 
onwards - this system will use Ada 83 as the Ada 95 compilers won't be 
usable in that timescale.
[I feel yet another post coming on, about compiler validation for 
safety-critical code, but it's time to go and cut the grass before it 
rains again.]

>   Richard Riehle
> 

Phil Thornley
-- 
------------------------------------------------------------------------
| JP Thornley    EMail jpt@diphi.demon.co.uk                           |
------------------------------------------------------------------------






* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-25  0:00     ` Need help with PowerPC/Ada and realtime tasking JP Thornley
@ 1996-05-27  0:00       ` Robert Dewar
  1996-05-28  0:00         ` JP Thornley
  0 siblings, 1 reply; 39+ messages in thread
From: Robert Dewar @ 1996-05-27  0:00 UTC (permalink / raw)



JP Thornley said

"My view is that code can never be judged as safe or unsafe - only
correct or incorrect.  However my usage of the words "safe" - and
"safety-critical" carries a lot of additional baggage, and it is
possible that we are differing over the meaning of these words rather
than anything fundamental.
"

I think that is completely wrong. Correctness, i.e. formal conformance
between the implementation and the specification, is neither necessary
nor sufficient for safety.

It is not necessary, because there can be deviations that are not
life-critical, e.g. if the horizon display on the pilots console
is not the specified shade of green, it is not critical.

It is not sufficient, because the formal specification may be incomplete
or incorrect.







* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-25  0:00     ` JP Thornley
@ 1996-05-27  0:00       ` Darren C Davenport
  1996-05-30  0:00         ` Ralph E. Crafts
  1996-05-28  0:00       ` Tasking in safety-critical software (!) (was Re: Need help with PowerPC/Ada and realtime tasking) Kevin F. Quinn
  1 sibling, 1 reply; 39+ messages in thread
From: Darren C Davenport @ 1996-05-27  0:00 UTC (permalink / raw)



JP Thornley <jpt@diphi.demon.co.uk> wrote:
>I was beginning to wonder whether I was the only reader of cla still 
>using Ada 83 until there were some recent posts from others in the same 
>situation in another thread.  To put my situation more clearly, there is 
>one safety-critical system going into system design later this year, 
>first trails to be run in 1998 and delivery to the customer in 2000 
>onwards - this system will use Ada 83 as the Ada 95 compilers won't be 
>usable in that timescale.
>
>Phil Thornley


Doesn't OC systems sell an Ada95 compiler for the PowerPC that's been
validated?
Seems like you ought to check out what the vendors provide before saying
that there won't be any Ada95 compilers in that timescale.

Darren






* Tasking in safety-critical software (!) (was Re: Need help with PowerPC/Ada and realtime tasking)
  1996-05-25  0:00     ` JP Thornley
  1996-05-27  0:00       ` Darren C Davenport
@ 1996-05-28  0:00       ` Kevin F. Quinn
  1 sibling, 0 replies; 39+ messages in thread
From: Kevin F. Quinn @ 1996-05-28  0:00 UTC (permalink / raw)



JP Thornley wrote:
> 
> Richard Riehle <rriehle@nunic.nu.edu> writes:
>     [well argued case for tasking mostly snipped]

> Several studies into safety-critical subsets have all rejected tasking:-
>         Safe Ada
>         SPARK Ada
>         High Integrity Ada Study (YSE/BAe)
>         CSMART Ada
> so there is going to be a major credibility problem convincing a
> qualification authority to go along with tasking.
> 
> Allow also for the relevant personnel from that authority to have an
> unknown level of mathematical and computer system literacy.

It's worth noting at this point, that IME even the majority of software
engineers have difficulty with the complexity of proving things about
systems using tasks, in anything but the most trivial of cases.  Especially
given that there's little or no tool support.  And the reason there's no
tool support (i.e. the reason why tasks are excluded from those subsets) is
that it's mathematically complex, in the extreme, to perform automated proofs.
Many language features are excluded from those subsets largely on grounds
of being difficult to analyse, rather than any inherent danger from dynamic
behaviours.


> >   A cyclic executive does not guarantee schedulability.  It does
> >   not even
> >   guarantee predictability.  In fact, a cyclic executive can guarantee
> >   that one actually fails to trap events as they occur.
> 
> But safety-critical systems I'm talking about have *no* interrupts (see
> my original response - perhaps not sufficiently emphasised) so this
> argument does not apply.  (Actually there is one - the timer that drives
> the minor cycle - and this has to be explicitly justified in the safety
> case.)

Agreed.  The cyclic executive _can_ be used to guarantee schedule periods.
In a real-time, safety critical system, a major part of the various analyses
performed are to prove termination of a subprogram, and in the case of real
time applications to prove upper limits on execution times.  This enables
worst-case schedule times to be proven to a high level of confidence.  Of
course, essentially the cyclic executive approach takes control away from
the "black box" tasking system, and places control explicitly throughout
the application, which helps one to "know" what's happening at various points.

> >   An additional problem with the cyclic executive model is its
> >   lack of portability.  Since it is dependent on the timings of
> >   the platform to
> >   which it is targeted, it can fail to meet its goals when ported to a
> >   new platform.
> 
> Agreed - any hardware or software change that affects the timing
> analysis means that you have to do it all again, and you may have to
> redesign the schedule.  (But if it comes to a trade-off between safety
> and portability there's not much doubt as to which way I'd go.)

Portability is largely a non-issue, w.r.t. safety critical software, IME.
As you say, any change in hardware necessitates large volumes of analysis
to be performed again.  Another point is that when hardware changes
substantially (e.g. processor "upgrade", possibly the only real case where
portability would be useful) then the opportunity is often taken to implement
additional requirements that mean large-scale modifications to the software
anyway.

> >   ....  Remember that I speak here of Ada 95 tasking.
> 
> I was beginning to wonder whether I was the only reader of cla still
> using Ada 83 until there were some recent posts from others in the same
> situation in another thread.

Reading this group does, IMO, give a false impression as to the market
penetration of Ada95, certainly in the defence industry.  Large sections
of the industry are still only vaguely aware that it even exists as a
standard, let alone that GNAT is now an almost complete implementation.
Certainly most people I work with (and have worked with), who work with
Ada every day, know little or nothing about GNAT.  Another issue is that
the sort of projects we're talking about have long development periods,
of the order of many years, and of course Ada95 compilers don't have
anything like the maturity of existing Ada83 compilers.

Having said all that, it'll be interesting to see whether GNAT gets used
in very safety-critical systems.  All the source-code analysis in the world
won't help in the slightest if the compiler isn't correct - and with commercial
compilers the best you can expect to get is some kind of statement from
the vendor as to the correctness of the compiler.  Ada95 does have the extra
pragmas to assist in object-code verification, but with GNAT, you have
the complete source code to the compiler - something commercial compiler
writers are unlikely to release, so there are possibilities for actually
including the compiler itself into the proofs.  Not that I'd relish the
task of proving much about the compiler, of course :)

Hmm; that's quite enough for now...

Kev.
Speaking for (and no doubt to ;) ) himself.




^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-18  0:00 ` JP Thornley
  1996-05-20  0:00   ` Robert I. Eachus
  1996-05-21  0:00   ` Richard Riehle
@ 1996-05-28  0:00   ` Robert I. Eachus
  1996-05-30  0:00     ` JP Thornley
  1996-05-28  0:00   ` Robert I. Eachus
                     ` (2 subsequent siblings)
  5 siblings, 1 reply; 39+ messages in thread
From: Robert I. Eachus @ 1996-05-28  0:00 UTC (permalink / raw)



In article <122916091wnr@diphi.demon.co.uk> JP Thornley <jpt@diphi.demon.co.uk> writes:

  > Several studies into safety-critical subsets have all rejected tasking:-
  >	   Safe Ada
  >	   SPARK Ada
  >	   High Integrity Ada Study (YSE/BAe)
  >	   CSMART Ada
  > so there is going to be a major credibility problem convincing a 
  > qualification authority to go along with tasking.

    I won't go too deep into the gory details, but there is Ada
tasking and Ada tasking.  There are some features which anyone writing
or approving safety-critical applications should reject, and there are
some features that most real-time systems need.  The problem is where
to draw the line.

    A simple example of something that should be verboten is dynamic
creation of tasks.  If you can't tell before the program runs how many
tasks are needed and how much memory they will use, there is no way to
certify the software.  If you can tell, the most sensible thing to do
is to create those tasks at the start, and leave out heap and memory
management.  Now look at RM95 D.7.  It spells out arguments to pragma
Restrictions which can be used to ensure such an environment:
Max_Tasks, No_Task_Hierarchy, No_Task_Allocators, and
No_Implicit_Heap_Allocations.
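For illustration, the environment described can be requested with
configuration pragmas along these lines (Ada 95; the Max_Tasks value is of
course application-specific):

```ada
--  Configuration pragmas (RM95 D.7): compiled into the environment so the
--  compiler can reject violations and link in a reduced run-time system.
pragma Restrictions (Max_Tasks => 12);              --  static task population
pragma Restrictions (No_Task_Hierarchy);            --  all tasks at library level
pragma Restrictions (No_Task_Allocators);           --  no "new Some_Task_Type"
pragma Restrictions (No_Implicit_Heap_Allocations); --  no hidden heap usage
```

Any unit that violates one of these is then rejected at compile time rather
than discovered in review.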

  > Tasks in the application require tasking run-time support.  This
  > will therefore need to be qualified to safety-critical standards
  > (ie reasonable expectation of zero failures, see other post in
  > this thread).  Since I've never done it, I don't know what it
  > would take in terms of effort, but it can't be anything other than
  > a very major undertaking.

  A restricted run-time (with the above restrictions plus, among others,
No_Abort_Statements and No_Terminate_Alternatives) should be small and
easy to certify.  The parts of tasking that make the run-time complex
have no place in real-time or safety-critical code.

  On the other hand, they are necessary when creating robust large
complex information systems, and sometimes you need a mix of both.
You know that awful feeling on a PC or wherever, when the screen stops
responding to your inputs due to an error in the input handler?  A
"dead man" timer can abort that task and replace it.  Trivial with a
good Ada implementation, and the user may never notice.  But three
seconds--even three milliseconds--of ignored input has no place in any
high-priority task in any real-time system I have ever worked with, but
such things--usually with much shorter fuses--can be useful for lower
priority tasks.
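A sketch of that dead-man arrangement in Ada 95 (the names and the
three-second fuse are invented; note that it relies on exactly the features
ruled out above for safety-critical code, abort and dynamic task creation):

```ada
with Ada.Real_Time; use Ada.Real_Time;

procedure Watchdog_Demo is

   Alive : Boolean := False;
   pragma Atomic (Alive);               --  heartbeat flag, written by the handler

   task type Input_Handler;
   type Handler_Ref is access Input_Handler;

   task body Input_Handler is
   begin
      loop
         delay 0.01;                    --  stand-in for handling one input event
         Alive := True;                 --  heartbeat
      end loop;
   end Input_Handler;

   Handler : Handler_Ref := new Input_Handler;
begin
   loop
      delay until Clock + Milliseconds (3_000);   --  the "dead man" interval
      if not Alive then
         abort Handler.all;             --  kill the hung handler ...
         Handler := new Input_Handler;  --  ... and start a fresh one
      end if;
      Alive := False;                   --  must be set again before the next check
   end loop;
end Watchdog_Demo;
```

The user of an interactive system may never notice the restart; in a
hard-real-time task set the same pattern would need a far shorter fuse and
would be confined to the low-priority tasks.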

  > I would guess that most of the tasking part of the run-time will
  > be written in Ada - so will be required to either conform to the
  > safety-critical subset in use or be re-written to that sub-set.
  > Common restrictions include no access types and no heap usage - is
  > this likely to be a problem?  [One problem of working with small
  > subsets is that you end up knowing nothing about the rest of the
  > language, it's about six years since I last saw an Ada task
  > anywhere other than a text-book or journal.]
  
  See the above, not only no problem, but expected in many environments.

  > There is a related issue that doesn't come up very often when discussing 
  > scheduling strategies, which is the accuracy of the worst-case execution 
  > times used in the analysis.  Deriving these figures requires a major 
  > effort, with substantial error bounds on the resulting timings.  I 
  > believe that the figures that I currently use are typically in the range 
  > 10%-30% pessimistic and I wouldn't be happy to use figures with a lower 
  > margin of error unless I can believe that their accuracy is improved.

  Uh, a couple of points here.  There are fudge factors and fudge
factors.  I've worked with code where the key timings were known to
the accuracy of the clock, and ate up over 95% of the time available.
Better have very predictable processors for that.  (And be near
end-of-life for the system.  The one thing we know more than anything
is that upgrades very seldom run faster.)  On some of the same
applications, the low priority task timings are known, and knowable
with much less accuracy.  You either live with generous margins, or
design the system so that overruns in the low priority tasks cannot
affect the higher priority tasks.  Hard to do in cyclic executives
(but I have seen it done), much easier with rate monotonic.

  Second, even cyclic executives need some slop for schedulability,
often one third or more of the cycles.  (You schedule a cyclic
executive based on worst-case timings in all tasks, even if two events
are mutually exclusive--or you HOPE they are--such as landing and
weapons release.)  With RMA you can often do much better, but the more
important thing is to be able to prove you haven't done worse.
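The claim about RMA rests on the classic Liu and Layland bound: n
independent periodic tasks are schedulable under rate monotonic priorities
if total utilisation stays below n * (2**(1/n) - 1).  A few lines of Ada to
evaluate it (illustration only):

```ada
with Ada.Text_IO;                       use Ada.Text_IO;
with Ada.Numerics.Elementary_Functions; use Ada.Numerics.Elementary_Functions;

procedure RMA_Bound is
   --  Liu & Layland (1973) worst-case utilisation bound for n tasks.
   function Bound (N : Positive) return Float is
   begin
      return Float (N) * (2.0 ** (1.0 / Float (N)) - 1.0);
   end Bound;
begin
   for N in 1 .. 4 loop
      Put_Line ("n =" & Integer'Image (N) &
                "  bound =" & Float'Image (Bound (N)));
   end loop;
   --  falls from 1.0 (n = 1) towards ln 2, about 0.693, as n grows
end RMA_Bound;
```

So a 40% load sits comfortably under the bound for any number of tasks,
which is what makes the "guarantee" provable rather than hoped for.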

  > I can't get excited about more elaborate scheduling strategies to
  > squeeze another 5% out of the processor with safety margins like
  > this (I'd sooner put it into more accurate timing figures).

  Exactly.  But I have seen cyclic executives with 5% duty cycles
where pushing them any harder means failing scheduling.  I'd much
rather approve a system with a 40% load and RMA "guaranteeing" schedulability
at twice that, than a cyclic with a 30% load and crossed fingers.

  > I was beginning to wonder whether I was the only reader of cla still 
  > using Ada 83 until there were some recent posts from others in the same 
  > situation in another thread.  To put my situation more clearly, there is 
  > one safety-critical system going into system design later this year, 
  > first trials to be run in 1998 and delivery to the customer in 2000 
  > onwards - this system will use Ada 83 as the Ada 95 compilers won't be 
  > usable in that timescale.

   For such a system, switching to Ada 95 may make sense.  If you are
not yet into coding, and with no design changes, you should see some
reduction in code costs.  But you could be taking a risk if you
require ACVC 2.1 validation.   A small risk, since there should be
several 2.1 validated compilers next year, but I guess it depends on
when you need the validated version by.  If this is not a DoD system,
then other validations may matter more.

--

					Robert I. Eachus

with Standard_Disclaimer;
use  Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...





* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-18  0:00 ` JP Thornley
                     ` (2 preceding siblings ...)
  1996-05-28  0:00   ` Robert I. Eachus
@ 1996-05-28  0:00   ` Robert I. Eachus
  1996-05-30  0:00     ` JP Thornley
  1996-05-31  0:00   ` Robert I. Eachus
  1996-06-03  0:00   ` Ralph Paul
  5 siblings, 1 reply; 39+ messages in thread
From: Robert I. Eachus @ 1996-05-28  0:00 UTC (permalink / raw)



In article <355912560wnr@diphi.demon.co.uk> JP Thornley <jpt@diphi.demon.co.uk> writes:

  > My view is that code can never be judged as safe or unsafe - only
  > correct or incorrect.  However my usage of the words "safe" - and
  > "safety-critical" carries a lot of additional baggage, and it is
  > possible that we are differing over the meaning of these words
  > rather than anything fundamental...

  > So safety is measured by (usually) small but definitely non-zero 
  > numbers; software is either correct or not, with no numeric scale.

   You must work with a different type of software than I do.  With a
restricted input set, such analysis can be right.

   But I (and a lot of other people) find that some system
requirements delegated to the software do have percentages or failure
tolerances attached.  For example, in a target rich environment, what
is the probability that two detections of a target from a pulse
doppler radar will not be correlated?  There are failure rates for the
sensors, some tied to the geometry of the radar or the signatures of
the target, and some failures due to the algorithms (and software)
used. 

   This is not just a military issue.  Many years ago, there were
several planes that crashed using radar altimeters.  It turned out
that under certain circumstances the bottom return off of a lake was
read instead of the surface reflection.  (Usually at night, when a
thin skin of ice resulted in very smooth surfaces which were read at
the wrong angle for specular reflections.)  Oops!  Pilots would adjust
their barometric altimeters to match, then crash on final approach.
Once the problem was known, it was possible to "smarten up" the radar.

   Similarly, in a C3I system, what is the probability that incorrect
inputs will be detected?  The original error may be an operator error,
but the error detection software gets tagged with the error budget for
undetected errors.

    Last but not least, there is the ugly ghost of Godel.  Systems
below a certain level of complexity can be 100% right or 100% wrong.
Above that line, software systems look a lot more like hardware.

--

					Robert I. Eachus

with Standard_Disclaimer;
use  Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...





* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-27  0:00       ` Robert Dewar
@ 1996-05-28  0:00         ` JP Thornley
  1996-05-29  0:00           ` Ken Garlington
  0 siblings, 1 reply; 39+ messages in thread
From: JP Thornley @ 1996-05-28  0:00 UTC (permalink / raw)



In article: <dewar.833197805@schonberg>  Robert Dewar writes:
> 
> JP Thornley said
> 
> "My view is that code can never be judged as safe or unsafe - only
> correct or incorrect.  However my usage of the words "safe" - and
> "safety-critical" carries a lot of additional baggage, and it is
> possible that we are differing over the meaning of these words rather
> than anything fundamental.
> "
> 
> I think that is completely wrong. Correctness, i.e. formal conformance
> between the implementation and the specification, is neither necessary
> nor sufficient for safety.
> 
> It is not necessary, because there can be deviations that are not
> life-critical, e.g. if the horizon display on the pilots console
> is not the specified shade of green, it is not critical.
> 
> It is not sufficient, because the formal specification may be incomplete
> or incorrect.
> 

But I am talking only about those software components of the system that 
have been rated as safety-critical - so, by definition, a failure of 
that component to meet its requirements creates an uncontrolled risk of 
a hazard occurring.  I would be surprised if the exact shade of green 
on a display were to be rated safety-critical.  (I suspect that it is 
unusual for any part of a display to be rated as safety-critical as 
there will always be multiple independent sources of information).

Incomplete or incorrect requirements for a software component affect 
*system* safety - just as they would for any other type of component.  

Clearly it is a software engineering responsibility to check the 
requirements for incompleteness and ambiguity but, for example, if an 
algorithm is specified incorrectly and this results in a valve opening 
instead of it remaining closed, I do not see what is gained by claiming 
that the software which implements that algorithm is unsafe.  As another 
way of looking at this, what actions can a software engineer take to 
create safe software from potentially incorrect requirements (apart from
being a better domain expert than the systems engineer and getting the 
requirements changed)?

-- 
------------------------------------------------------------------------
| JP Thornley    EMail jpt@diphi.demon.co.uk                           |
------------------------------------------------------------------------






* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-28  0:00         ` JP Thornley
@ 1996-05-29  0:00           ` Ken Garlington
  1996-05-29  0:00             ` Robert A Duff
  0 siblings, 1 reply; 39+ messages in thread
From: Ken Garlington @ 1996-05-29  0:00 UTC (permalink / raw)



JP Thornley wrote:
> 
> But I am talking only about those software components of the system that
> have been rated as safety-critical - so, by definition, a failure of
> that component to meet its requirements creates an uncontrolled risk of
> a hazard occurring.  I would be surprised if the exact shade of green
> on a display were to be rated safety-critical.  (I suspect that it is
> unusual for any part of a display to be rated as safety-critical as
> there will always be multiple independent sources of information).

Keen! A discussion of software safety in an Ada conference! :)

Actually, I have seen displays rated safety-critical, even in the presence
of multiple sources of information. For example, if a head-up display
shows critical flight information, the HUD might be safety-critical even
in the presence of backup displays, since the pilot may be in a regime where
reference to head-down information is impractical. There may also be a safety
risk if the pilot is presented with conflicting data from multiple displays
(one correct, one failed).

> Clearly it is a software engineering responsibility to check the
> requirements for incompleteness and ambiguity but, for example, if an
> algorithm is specified incorrectly and this results in a valve opening
> instead of it remaining closed, I do not see what is gained by claiming
> that the software which implements that algorithm is unsafe.  As another
> way of looking at this, what actions can a software engineer take to
> create safe software from potentially incorrect requirements (apart from
> being a better domain expert than the systems engineer and getting the
> requirements changed)?

The claim Dr. Leveson makes in "Safeware" is that the protection is provided
by having independent hardware fail-safes for safety-critical software. For
example, in the Therac-25 example, having a hardware device that shuts down
the beam automatically after a fixed time limit. The fail-safe doesn't have
to duplicate the function of the software; it just has to provide a minimal
shutdown capacity. I still haven't figured out a practical way to do this for
my system, but I'm sure it's a good idea for certain systems. At least
in my environment, the software engineer provides feedback to the domain
engineer, so I suppose it is a software engineering job to get requirements
changed, suggest additional safety features, etc.

It sounds like the point has already been made, but it is also good to
remember that, technically, correctness and safety don't have to be related.
You can have correct software that is unsafe, and incorrect software that
is safe.

-- 
LMTAS - "Our Brand Means Quality"





* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-29  0:00           ` Ken Garlington
@ 1996-05-29  0:00             ` Robert A Duff
  1996-05-30  0:00               ` Software Safety (was: Need help with PowerPC/Ada and realtime tasking) Ken Garlington
  1996-05-30  0:00               ` Need help with PowerPC/Ada and realtime tasking JP Thornley
  0 siblings, 2 replies; 39+ messages in thread
From: Robert A Duff @ 1996-05-29  0:00 UTC (permalink / raw)



In article <31AC0712.29DF@lmtas.lmco.com>,
Ken Garlington  <garlingtonke@lmtas.lmco.com> wrote:
>...At least
>in my environment, the software engineer provides feedback to the domain
>engineer, so I suppose it is a software engineering job to get requirements
>changed, suggest additional safety features, etc.

A very good point.  If a brick-layer notices that laying bricks the way
the architect said is impossible or stupid or even just questionable,
then the brick-layer ought to notify the architect that something is,
perhaps, screwed up.

>It sounds like the point has already been made, but it is also good to
>remember that, technically, correctness and safety don't have to be related.
>You can have correct software that is unsafe, and incorrect software that
>is safe.

I suppose it depends on your definition of "correct".  The
proof-of-correctness folks define "correct" to mean "correctly obeys the
formal specification".  To me, that's a bogus definition.  In plain
English, a "correct" program is one that does what it's supposed to do,
regardless of whether the specification is wrong.  If you show me a word
processor that deletes my file when I told it to italicize a word, to
me, that's incorrect, even if you can show me a (bogus) formal spec that
says that the "italicize word" function should delete files.  If you
show me a flight-control program that crashes airplanes, that's
incorrect IMHO, despite the fact that you might show me a formal spec
saying it did what the formal spec says.

- Bob





* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-28  0:00   ` Robert I. Eachus
@ 1996-05-30  0:00     ` JP Thornley
  1996-06-03  0:00       ` Ken Garlington
  0 siblings, 1 reply; 39+ messages in thread
From: JP Thornley @ 1996-05-30  0:00 UTC (permalink / raw)



In article: <EACHUS.96May28192552@spectre.mitre.org>  
eachus@spectre.mitre.org (Robert I. Eachus) writes:
>                        .........  There are failure rates for the
> sensors, some tied to the geometry of the radar or the signatures of
> the target, and some failures due to the algorithms (and software)
> used. 

As I read this, the failure rate is connected with the choice of 
algorithm.  If this is not specified in the requirements then these 
failure rates (resulting from the algorithm chosen by the software 
engineer) have to be brought into the design and code level hazard 
analyses, so that they can be fed into the wider system safety 
assessment.  But the point that this responded to is not talking about 
failure rates of algorithms in this sense, but failure rates of the 
implementation in producing the 'correct' value according to the 
algorithm specification (and yes, the software engineer is responsible 
for the error bounds on the implementation method chosen).

   [snip]

>     Last but not least, there is the ugly ghost of Godel.  Systems
> below a certain level of complexity can be 100% right or 100% wrong.
> Above that line, software systems look a lot more like hardware.

I wonder if this is where computer science and (my sort of) software 
engineering part company.  Or do you while away the hours when 
travelling by Boeing 777 wondering whether the flight control computers 
are about to be presented with an undecidable proposition?

> 
> --
> 
> 					Robert I. Eachus
> 
> with Standard_Disclaimer;
> use  Standard_Disclaimer;
> function Message (Text: in Clever_Ideas) return Better_Ideas is...
> 
> 
-- 
------------------------------------------------------------------------
| JP Thornley    EMail jpt@diphi.demon.co.uk                           |
------------------------------------------------------------------------






* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-29  0:00             ` Robert A Duff
  1996-05-30  0:00               ` Software Safety (was: Need help with PowerPC/Ada and realtime tasking) Ken Garlington
@ 1996-05-30  0:00               ` JP Thornley
  1996-05-31  0:00                 ` Ken Garlington
  1 sibling, 1 reply; 39+ messages in thread
From: JP Thornley @ 1996-05-30  0:00 UTC (permalink / raw)



In article: <Ds6M4D.E09@world.std.com>  bobduff@world.std.com (Robert A 
Duff) writes:
> I suppose it depends on your definition of "correct".  The
> proof-of-correctness folks define "correct" to mean "correctly obeys the
> formal specification".  To me, that's a bogus definition.  In plain
> English, a "correct" program is one that does what it's supposed to do,
> regardless of whether the specification is wrong.  If you show me a word
> processor that deletes my file when I told it to italicize a word, to
> me, that's incorrect, even if you can show me a (bogus) formal spec that
> says that the "italicize word" function should delete files. 

I'd be more impressed by this argument if the examples (such as 
this and "safety-critical green") were more realistic.
[But see below]

                                                           If you
> show me a flight-control program that crashes airplanes, that's
> incorrect IMHO, despite the fact that you might show me a formal spec
> saying it did what the formal spec says.
> 

OK, a slightly better example (I assume that you don't mean that the 
requirement has a 'crash plane' operation).

But how does this point of view influence the production of the code 
that implements the requirements (other than by asking the software 
engineer to be aware of possible errors and omissions and looking out 
for them - a point that has already been covered in these discussions).

There seem to be three main software engineering steps to implementing 
safety-critical code.

1. Examine the requirements and satisfy yourself that they are 
acceptable.

2. Implement the requirements as specified (a *correct* implementation).
This process can bring out further anomalies in the requirements.

3. Identify any additional potential failures that have been introduced 
by the implementation and deal with them (either at the software or 
the system level).  For example how do you recognise and deal with 
corruption of data stored in RAM.
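For the RAM-corruption example in step 3, one well-worn pattern is to hold
every critical value alongside its bitwise complement; a sketch (the
package and the choice of response are invented for illustration):

```ada
with Interfaces; use Interfaces;

package Guarded_Storage is
   --  Keep each critical word together with its bitwise complement;
   --  corruption of either copy is caught on every read.
   type Guarded_Word is private;
   Corruption_Detected : exception;
   procedure Store (G : out Guarded_Word; Value : Unsigned_32);
   function  Fetch (G : Guarded_Word) return Unsigned_32;
private
   type Guarded_Word is record
      Value : Unsigned_32;
      Check : Unsigned_32;   --  always holds "not Value"
   end record;
end Guarded_Storage;

package body Guarded_Storage is
   procedure Store (G : out Guarded_Word; Value : Unsigned_32) is
   begin
      G.Value := Value;
      G.Check := not Value;
   end Store;

   function Fetch (G : Guarded_Word) return Unsigned_32 is
   begin
      if G.Check /= not G.Value then
         raise Corruption_Detected;   --  dealt with at software or system level
      end if;
      return G.Value;
   end Fetch;
end Guarded_Storage;
```

Whether the exception is handled in software or escalated to a system-level
fail-safe is exactly the step-3 decision described above.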

If you demand a correct/safe implementation in code of a flight control
system from requirements that are incorrect/unsafe then what steps 
would you take to achieve it?

-- 
------------------------------------------------------------------------
| JP Thornley    EMail jpt@diphi.demon.co.uk                           |
------------------------------------------------------------------------





* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-28  0:00   ` Robert I. Eachus
@ 1996-05-30  0:00     ` JP Thornley
  0 siblings, 0 replies; 39+ messages in thread
From: JP Thornley @ 1996-05-30  0:00 UTC (permalink / raw)



In article: <EACHUS.96May28185945@spectre.mitre.org>  
eachus@spectre.mitre.org (Robert I. Eachus) writes:
> ... other validations may matter more.

And the one that matters most is our own.

The primary need is an acceptable level of assurance that the compiler 
has no code generation fault that could affect the executable code in 
the box.

So, the starting point is a (probably) validated compiler that is known 
to have had a wide usage in different applications.  The bug list must be 
stable and all the code generation bugs should be definable in terms of 
the Ada source constructs that may raise the bug - this is so that  
avoidance strategies and checking actions can be laid down to give 
assurance that the construct is not used (if this is not possible then a 
method for inspecting the executable for the error must be available).

The likelihood of the discovery of new code generation faults must also 
be low, as the effort needed to deal with them after the start of coding 
could be unacceptably large.

That is the starting point for our own validation activities - which, in 
the past, have been based around a set of test cases that use every 
language construct (in the subset, not the full language) in every 
context in which they might appear.  These test cases are compiled and 
the object code checked for acceptability.  Any anomalies found here 
will place further restrictions on the code (for example, with a 
compiler currently in use, address representation clauses in package 
bodies are avoided).

If this still looks OK then the compiler is judged acceptable for 
safety-critical code.

Subsequently, as the application code is produced, samples are compiled 
and the object code reviewed against the source to maintain confidence 
that the application is not going into untested areas of the compiler.  
Any compilation units that do not use the predefined compiler switches 
also have the object code inspected.

If the level of effort required for the test cases is not feasible for a 
particular project/compiler then it is necessary to inspect 100% of the 
compiler output against the source.

-- 
------------------------------------------------------------------------
| JP Thornley    EMail jpt@diphi.demon.co.uk                           |
------------------------------------------------------------------------






* Re: Software Safety (was: Need help with PowerPC/Ada and realtime tasking)
  1996-05-30  0:00               ` Software Safety (was: Need help with PowerPC/Ada and realtime tasking) Ken Garlington
@ 1996-05-30  0:00                 ` Robert Dewar
  1996-06-02  0:00                   ` JP Thornley
  1996-06-03  0:00                   ` Robert A Duff
  1996-05-31  0:00                 ` Robert A Duff
  1 sibling, 2 replies; 39+ messages in thread
From: Robert Dewar @ 1996-05-30  0:00 UTC (permalink / raw)



Bob Duff said (if I have not lost track of who said what :-)

"> I suppose it depends on your definition of "correct".  The
> proof-of-correctness folks define "correct" to mean "correctly obeys the
> formal specification".  To me, that's a bogus definition.  In plain
> English, a "correct" program is one that does what it's supposed to do,
> regardless of whether the specification is wrong.  If you show me a word
> processor that deletes my file when I told it to italicize a word, to
> me, that's incorrect, even if you can show me a (bogus) formal spec that
> says that the "italicize word" function should delete files.  If you
> show me a flight-control program that crashes airplanes, that's
> incorrect IMHO, despite the fact that you might show me a formal spec
> saying it did what the formal spec says."

What's the point of degrading this useful technical term this way? By
your definition, correct just means good or some such subjective term.
The concept of obeying a formal specification is a useful one, and it
is one which has been given the name "correctness" in the programming
language area.

I admit it is occasionally confusing when standard English words are (mis)used
in a specific technical way, but as long as everyone understands the
usage (and correctness has been used in this specific way for many years),
then it is useful (after all the Ada 95 RM is full of normal English words
used in a non-standard way :-)







* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-27  0:00       ` Darren C Davenport
@ 1996-05-30  0:00         ` Ralph E. Crafts
  1996-05-31  0:00           ` JP Thornley
  0 siblings, 1 reply; 39+ messages in thread
From: Ralph E. Crafts @ 1996-05-30  0:00 UTC (permalink / raw)



Darren C Davenport (ddavenpo@redwood.dn.hac.com) wrote:
: JP Thornley <jpt@diphi.demon.co.uk> wrote:
: >I was beginning to wonder whether I was the only reader of cla still 
: >using Ada 83 until there were some recent posts from others in the same 
: >situation in another thread.  To put my situation more clearly, there is 
: >one safety-critical system going into system design later this year, 
: >first trials to be run in 1998 and delivery to the customer in 2000 
: >onwards - this system will use Ada 83 as the Ada 95 compilers won't be 
: >usable in that timescale.
: >
: >Phil Thornley


: Doesn't OC systems sell an Ada95 compiler for the PowerPC that's been
: validated?
: Seems like you ought to check out what the vendors provide before saying
: that there won't be any Ada95 compilers in that timescale.

: Darren

Thank you, Darren, for mentioning our Ada 95 compiler.  I have refrained
from weighing in on this thread up to now, but now I have to respond.

As all experienced Ada people know, validation does not automatically
equate to usable.  In my 15+ years in the Ada market, I have seen 
innumerable instances where validated compilers were just awful.  

In the case of the OC Systems product, PowerAda, we have been shipping
the product, and our customers have been using it, for several years.
The Ada 95 compiler was validated in December 1995, and has been 
shipping as part of the product since January.  The fact is, our
customers are using PowerAda for Ada 95 development--the product is
usable now.  

This is not to say that PowerAda is perfect, nor is "all" of Ada 95 
included--we will continue to improve and add additional features, 
including the Ada 95 Annexes.  I don't believe that Phil is accurate
in his statement that Ada 95 compilers "won't be usable" in the period
from now to 1998-2000.  

For embedded safety-critical systems, we validated using LynxOS (in 
addition to AIX).  LynxOS has been used in many safety-critical systems,
and has been certified by organizations such as the FAA and NASA for 
such applications.  LynxOS has also met or exceeded some commercial
certification requirements for medical applications.  It is POSIX 
compliant, and supported worldwide (as is PowerAda).

For the record, OC Systems is more than willing to work with companies/
organizations to implement Ada 95 features which are necessary for the
kinds of applications described by Phil.  I certainly believe that our
product can support his needs in the timeframe he describes.

Best regards to all,

--Ralph Crafts


-- 

     - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
     Ralph E. Crafts, Vice President            Voice:  703-359-9709
     Sales & Marketing                          FAX:    703-359-8161





* Software Safety (was: Need help with PowerPC/Ada and realtime tasking)
  1996-05-29  0:00             ` Robert A Duff
@ 1996-05-30  0:00               ` Ken Garlington
  1996-05-30  0:00                 ` Robert Dewar
  1996-05-31  0:00                 ` Robert A Duff
  1996-05-30  0:00               ` Need help with PowerPC/Ada and realtime tasking JP Thornley
  1 sibling, 2 replies; 39+ messages in thread
From: Ken Garlington @ 1996-05-30  0:00 UTC (permalink / raw)



Robert A Duff wrote:
> 
> In article <31AC0712.29DF@lmtas.lmco.com>,
> Ken Garlington  <garlingtonke@lmtas.lmco.com> wrote:
> 
> >It sounds like the point has already been made, but it is also good to
> >remember that, technically, correctness and safety don't have to be related.
> >You can have correct software that is unsafe, and incorrect software that
> >is safe.
> 
> I suppose it depends on your definition of "correct".  The
> proof-of-correctness folks define "correct" to mean "correctly obeys the
> formal specification".  To me, that's a bogus definition.  In plain
> English, a "correct" program is one that does what it's supposed to do,
> regardless of whether the specification is wrong.  If you show me a word
> processor that deletes my file when I told it to italicize a word, to
> me, that's incorrect, even if you can show me a (bogus) formal spec that
> says that the "italicize word" function should delete files.  If you
> show me a flight-control program that crashes airplanes, that's
> incorrect IMHO, despite the fact that you might show me a formal spec
> saying it did what the formal spec says.

I have some sympathy for your more general definition, but there's also a
problem. For example, I cannot build a flight-control system that will
prevent aircraft crashes. No matter what I do, there will always be some
chain of events (plausible or otherwise) that will cause the plane to
crash. On an F-16, it's as simple as the pilot pointing the nose to the
ground too long, or flying through a sufficiently strong hurricane. Does
this make flight-control software inherently incorrect, or just inherently
unsafe under certain conditions?

Any requirements specification, formal or otherwise, has to make some
assumptions about the operating environment, and it has to make some
assumptions about "acceptable" chains of events that lead to unsafe
conditions. A requirements specification for a robot control arm may
assume that no humans will be in the vicinity during operation. This
may be a reasonable assumption, given the planned operating environment.
If a human is standing next to the arm during operation, and the arm
starts moving without warning and hits the human, was the software incorrect?
Was it unsafe? In this case, even with a general definition of "correctness",
the software might very well be considered correct, but unsafe (did not have
a warning function).

Note that the reverse is also possible. If the software was supposed to
command the robot arm to move, and it fails to do so, this might very
well be incorrect, but safe.

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-30  0:00               ` Need help with PowerPC/Ada and realtime tasking JP Thornley
@ 1996-05-31  0:00                 ` Ken Garlington
  1996-06-02  0:00                   ` JP Thornley
  0 siblings, 1 reply; 39+ messages in thread
From: Ken Garlington @ 1996-05-31  0:00 UTC (permalink / raw)



JP Thornley wrote:

> If you demand a correct/safe implementation in code of a flight control
> system from requirements that are incorrect/unsafe then what steps
> would you take to achieve it?

Actually, there are some things that can be _attempted_ at the implementation
level, although your results may vary:

1. You can have the system stop processing when an unexpected event occurs,
such as an Ada exception or a hardware interrupt. This assumes that denial of
service is not a safety-critical event in itself, of course.

2. You can create a "kernel mode," representing a subset of the requirements
that are absolutely necessary for safe operation. Using some means, such as
an operator input, you can cause the system to stop processing all code not
in the "kernel" when a fault is detected. This approach works on the theory that
the code in the "kernel" is used frequently, and so has a lot of test and
operational time. Conversely, the "non-kernel" code is used less frequently, and therefore
might have a potentially higher risk of a fault (either due to code
error or requirements error) than the "kernel." Clearly, the reversion to the
"kernel" must be done in an orderly fashion, with appropriate operator feedback.

3. You can use checkpoint/rollback techniques to try to overcome input errors
that were not adequately caught by the required input checking. I'm skeptical
of this approach in a dynamic environment, but there's a lot of literature
available on this type of technique.
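The "kernel mode" reversion in point 2 might be sketched in Ada roughly as
follows. This is a hypothetical illustration only -- the package, the subprogram
names (Run_Kernel_Tasks, Run_Non_Kernel_Tasks, Notify_Operator) and the overall
structure are invented for the example, not taken from any real system:

```ada
--  Hypothetical sketch: a cyclic executive that drops all
--  non-kernel processing after an unexpected fault.
package Modes is
   type Operating_Mode is (Full, Kernel);
   Current : Operating_Mode := Full;
end Modes;

--  Assumes Run_Kernel_Tasks, Run_Non_Kernel_Tasks and
--  Notify_Operator are declared elsewhere.
with Modes;
procedure Cyclic_Frame is
begin
   Run_Kernel_Tasks;                   --  safety-critical subset, always run
   if Modes.Current = Modes.Full then
      Run_Non_Kernel_Tasks;           --  suppressed after a fault
   end if;
exception
   when others =>
      Modes.Current := Modes.Kernel;  --  orderly reversion to the kernel
      Notify_Operator;                --  appropriate operator feedback
end Cyclic_Frame;
```

The point of the handler is that the frame after the fault still executes the
kernel subset, so denial of service is avoided for the safety-critical work.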

Personally, I think it's better to use failure modes and effects analysis on
the requirements, and do FMET testing after development, than to depend on this
stuff. However, some of this might work in some areas.

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-18  0:00 ` JP Thornley
                     ` (3 preceding siblings ...)
  1996-05-28  0:00   ` Robert I. Eachus
@ 1996-05-31  0:00   ` Robert I. Eachus
  1996-06-03  0:00   ` Ralph Paul
  5 siblings, 0 replies; 39+ messages in thread
From: Robert I. Eachus @ 1996-05-31  0:00 UTC (permalink / raw)



In article <637048781wnr@diphi.demon.co.uk> JP Thornley <jpt@diphi.demon.co.uk> writes:

  > I wonder if this is where computer science and (my sort of) software 
  > engineering part company.  Or do you while away the hours when 
  > travelling by Boeing 777 wondering whether the flight control computers 
  > are about to be presented with an undecidable proposition?

   I wish I could take this as a joke.  In commercial avionics, it is
certainly a goal, and sometimes an achievable one, to design control
systems to stay out of those modes.  But in "high-performance" aircraft,
and in particular the Space Shuttle, it is not always possible to
come up with control laws which are not chaotic.  And placard or no,
the pilot will be in those regimes.

   In the case of the A320, I would say that the implicit denial by
the software design process that such things could happen was much
more at fault than the language chosen for implementations.  When the
software encountered situations not imagined by the specification
authors, it behaved in predictable, but fatal fashion.  I much prefer
the design philosophy that things can go wrong, and you do want sanity
checks in the software.
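A sanity check of the sort described here might look like this in Ada. The
example is purely illustrative -- the signal name, the limits, and the
subprogram are all made up for the sketch:

```ada
--  Illustrative sanity check on a sensor input: values outside a
--  physically plausible range are clamped and flagged, rather than
--  fed unfiltered into the control laws.
subtype Plausible_AoA is Float range -30.0 .. 60.0;   --  degrees, invented limits

procedure Check_AoA
  (Raw     : in  Float;
   Checked : out Float;
   Suspect : out Boolean) is
begin
   if Raw in Plausible_AoA then
      Checked := Raw;
      Suspect := False;
   else
      Checked := Float'Max (-30.0, Float'Min (60.0, Raw));
      Suspect := True;   --  leave disposition to the voter/monitor
   end if;
end Check_AoA;
```

As discussed elsewhere in this thread, the resulting "suspect input" state must
itself appear in the requirements and the safety analysis.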

--

					Robert I. Eachus

with Standard_Disclaimer;
use  Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...




^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Software Safety (was: Need help with PowerPC/Ada and realtime tasking)
  1996-05-30  0:00               ` Software Safety (was: Need help with PowerPC/Ada and realtime tasking) Ken Garlington
  1996-05-30  0:00                 ` Robert Dewar
@ 1996-05-31  0:00                 ` Robert A Duff
  1996-06-03  0:00                   ` Ken Garlington
  1 sibling, 1 reply; 39+ messages in thread
From: Robert A Duff @ 1996-05-31  0:00 UTC (permalink / raw)



In article <31AD794D.2E62@lmtas.lmco.com>,
Ken Garlington  <garlingtonke@lmtas.lmco.com> wrote:
>I have some sympathy for your more general definition, but there's also a
>problem. For example, I cannot build a flight-control system that will
>prevent aircraft crashes. No matter what I do, there will always be some
>chain of events (plausible or otherwise) that will cause the plane to
>crash. On an F-16, it's as simple as the pilot pointing the nose to the
>ground too long, or flying through a sufficiently strong hurricane. Does
>this make flight-control software inherently incorrect, or just inherently
>unsafe under certain conditions?

Agreed.  We don't know for sure if that crash was the fault of the
software or the pilot.  We can study the circumstances, and come to an
opinion, but intelligent people will disagree on the point.

My objection to using the word "correct" to mean "software conforms to
some (possibly bogus) formal spec", is that it *automatically* blames
the pilot, even in cases where intelligent people would blame the
software.  Suppose the software gave confusing info to the pilot.  I
don't want some programmer saying, "But, the formal spec *required* the
software to give that confusing info, and we used formal correctness
proofs to ensure that."

The problem is that, in English, "correct" is an absolute term.  If you
say "This flight-control software has been mathematically proven to be
correct", there's a danger that non-programmers will believe, "They've
proven (beyond any doubt) that the software will do what it ought to do"
(i.e. the non-programmer misses the point that "what it ought to do" is
not necessarily the same thing as what the formal spec said).

Using the term "correct" as the proof-of-correctness folks like to,
gives a false sense of security, IMHO.  *My* meaning for "correct" is
harder to deal with, of course, because it requires human judgement
(what the program *ought* to do, as opposed to what somebody formally
*specified* it should do).  Well, that's life.

>Any requirements specification, formal or otherwise, has to make some
>assumptions about the operating environment, and it has to make some
>assumptions about "acceptable" chains of events that lead to unsafe
>conditions. A requirements specification for a robot control arm may
>assume that no humans will be in the vicinity during operation. This
>may be a reasonable assumption, given the planned operating environment.
>If a human is standing next to the arm during operation, and the arm
>starts moving without warning and hits the human, was the software incorrect?
>Was it unsafe? In this case, even with a general definition of "correctness",
>the software might very well be considered correct, but unsafe (did not have
>a warning function).

Was it unsafe?  I don't know.  It requires some human beings who know
about the situation to judge whether it was reasonable to assume that
the robot is out of reach of humans.  No mathematical proof techniques
are going to help you, there.  I think that if you mathematically prove
that program X obeys formal spec Y, then you should say, "I proved X
obeys Y", and not "I proved X is correct".  Even "I proved X correct
with respect to Y" is subject to misinterpretation, and the false sense
of security (which might lead to the robot boss telling a person to go
over there near the robot and...).

>Note that the reverse is also possible. If the software was supposed to
>command the robot arm to move, and it fails to do so, this might very
>well be incorrect, but safe.

Agreed.

- Bob




^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-30  0:00         ` Ralph E. Crafts
@ 1996-05-31  0:00           ` JP Thornley
  1996-06-03  0:00             ` Ken Garlington
  0 siblings, 1 reply; 39+ messages in thread
From: JP Thornley @ 1996-05-31  0:00 UTC (permalink / raw)



In article: <1996May30.103842.17355@ocsystems.com>  rec@ocsystems.com 
(Ralph E. Crafts) writes:
>                                 I don't believe that Phil is accurate
> in his statement that Ada 95 compilers "won't be usable" in the period
> from now to 1998-2000.  

This thread is about safety-critical software and "usable" has to be 
seen in that context.

Ralph, you may by now have seen my post on what it takes for us to 
accept a compiler - that post and yours crossed in my modem.

In particular there are severe requirements on what is a suitably mature 
compiler for us to start to look at it.  I haven't connected to see any 
responses to that post yet, but I'll be disappointed if no one mentions 
GNAT (because I've made a bet with myself that someone will).  However 
I imagine that GNAT would fail on stability grounds at the moment - 
there are clearly lots of users and lots of different sorts of use, but 
unless *one version* of it gets long-term usage, it can't be considered 
mature.

Your reference to continuing to add features suggests that PowerAda is 
in the same position.  (Please don't take this as a criticism - it's a
simple fact of life in the current state of the development of Ada 95
as a language.)

I often worry that safety-critical software development makes heavy 
demands on the supporting tools and the market for those tools is very 
small (I've seen a figure of 1% of Ada being safety-critical) so that 
long term vendor support must be questionable.  And I'll continue to 
worry even though at least two compiler vendors seem to be investing in 
this area at the moment.

Phil Thornley

-- 
------------------------------------------------------------------------
| JP Thornley    EMail jpt@diphi.demon.co.uk                           |
------------------------------------------------------------------------





^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-31  0:00                 ` Ken Garlington
@ 1996-06-02  0:00                   ` JP Thornley
  1996-06-03  0:00                     ` Ken Garlington
  0 siblings, 1 reply; 39+ messages in thread
From: JP Thornley @ 1996-06-02  0:00 UTC (permalink / raw)



Ken Garlington writes:
> 1. You can have the system stop processing when an unexpected event 
> occurs, ...
> 2. You can create a "kernel mode," representing a subset of the 
> requirements that are absolutely necessary for safe operation. ...
> 3. You can use checkpoint/rollback techniques ...
> 

and Robert I. Eachus writes:
>                                           ...... I much prefer
> the design philoshy that things can go wrong, and you do want sanity
> checks in the software.

I have no problem in agreeing with any of this, and would expect to see 
appropriate safeguards such as these and others being built into the 
software, but all of these techniques create system level behaviours 
that cannot be defined solely by the software engineer - the resulting 
system states must be visible in the system description and an obvious 
place is the software requirements.  If they are not in there as written 
then they should be introduced as the software implementation creates 
the need/opportunity to define them.

One certain route to unsafe systems is allowing the software to 
create undocumented system states that have not featured in the system 
safety analysis.

A factor that has not been mentioned yet is traceability, and I had 
rather assumed that everyone else was in a situation where every feature 
of the software is required to be traceable to a software requirement.  
So a correct (in my terms) implementation would also include the safe 
(in your terms) features that you are both asking for.

Ken Garlington also writes:
> Personally, I think it's better to use failure modes and effects 
> analysis on the requirements,

Agree absolutely - so this stuff has to *be* in the requirements - 

>  and do FMET testing after development, than to depend on this
> stuff. 

FMET is a new one on me (?Failure Modes Effect Testing??).  If that 
guess is right then I must admit to feeling uneasy about the ability 
of anyone to test thoroughly for all the effects of various weird 
failures.

-- 
------------------------------------------------------------------------
| JP Thornley    EMail jpt@diphi.demon.co.uk                           |
------------------------------------------------------------------------





^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Software Safety (was: Need help with PowerPC/Ada and realtime tasking)
  1996-05-30  0:00                 ` Robert Dewar
@ 1996-06-02  0:00                   ` JP Thornley
  1996-06-03  0:00                   ` Robert A Duff
  1 sibling, 0 replies; 39+ messages in thread
From: JP Thornley @ 1996-06-02  0:00 UTC (permalink / raw)



Robert Dewar writes:
> 
> Bob Duff said:
> 
> "> I suppose it depends on your definition of "correct".  The
> > proof-of-correctness folks define "correct" to mean "correctly obeys the
> > formal specification".  To me, that's a bogus definition. ..."

> What's the point of degrading this useful technical term this way. ...

So if there are two concepts, let's have two names.

I would support the formal usage for 'correct' and suggest 'high 
integrity' as the ideal name for Bob Duff's needs.

This new (to this thread ;-) term can readily encompass the (striving 
for) correctness that I want, the 'safe' features that Bob Duff, Ken 
Garlington and Robert Eachus want and the predictability that is 
required in terms of the absence of run-time errors/timing errors/
stack overflow etc.

Unfortunately (for me at least) I am going to be away from the net for 
the next three weeks, so I'm going to either miss any other posts or 
be very late seeing them (depending on how far back my news supply 
goes), so I'll be grateful for any CC's that let me see what happens.
[However I will be at Ada-Europe and would welcome the chance to 
continue the discussions there if at all possible.]

Phil Thornley

-- 
------------------------------------------------------------------------
| JP Thornley    EMail jpt@diphi.demon.co.uk                           |
------------------------------------------------------------------------





^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-18  0:00 ` JP Thornley
                     ` (4 preceding siblings ...)
  1996-05-31  0:00   ` Robert I. Eachus
@ 1996-06-03  0:00   ` Ralph Paul
  5 siblings, 0 replies; 39+ messages in thread
From: Ralph Paul @ 1996-06-03  0:00 UTC (permalink / raw)



Robert I. Eachus wrote:
> 
Hi,

>    In the case of the A320, I would say that the implicit denial by
> the software design process that such things could happen was much
> more at fault than the language chosen for implementations. 

Could you be a little more specific on this issue?
What accident(s) are you talking about?

Usually it is a combination of pilot error, bad training or overconfidence,
and system flaws that is to blame.  
 

-- 
Ralph Paul
				
					
	paul@aem.umn.edu
or	ralph@ifr.luftahrt.uni-stuttgart.de




^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-30  0:00     ` JP Thornley
@ 1996-06-03  0:00       ` Ken Garlington
  0 siblings, 0 replies; 39+ messages in thread
From: Ken Garlington @ 1996-06-03  0:00 UTC (permalink / raw)



JP Thornley wrote:

> I wonder if this is where computer science and (my sort of) software
> engineering part company.  Or do you while away the hours when
> travelling by Boeing 777 wondering whether the flight control computers
> are about to be presented with an undecidable proposition?

I don't. Having been a part of a few flight control system development
efforts, I _know_ that the computers can be presented with an undecidable
proposition.

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Need help with PowerPC/Ada and realtime tasking
  1996-05-31  0:00           ` JP Thornley
@ 1996-06-03  0:00             ` Ken Garlington
  0 siblings, 0 replies; 39+ messages in thread
From: Ken Garlington @ 1996-06-03  0:00 UTC (permalink / raw)



JP Thornley wrote:
> 
> This thread is about safety-critical software and "usable" has to be
> seen in that context.
> 
> In particular there are severe requirements on what is a suitably mature
> compiler for us to start to look at it.

I'm not sure about the length of your development cycle, but certainly it
is possible to _start_ with a compiler that is not fully mature, so long
as there is an adequate probability that the compiler will be fully mature
prior to the start of qualification testing. Doing this, of course, adds
some level of risk to a project from a management standpoint, but it isn't
a safety issue per se IMHO.

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Need help with PowerPC/Ada and realtime tasking
  1996-06-02  0:00                   ` JP Thornley
@ 1996-06-03  0:00                     ` Ken Garlington
  0 siblings, 0 replies; 39+ messages in thread
From: Ken Garlington @ 1996-06-03  0:00 UTC (permalink / raw)



JP Thornley wrote:
> 
> I have no problem in agreeing with any of this, and would expect to see
> appropriate safeguards such as these and others being built into the
> software, but all of these techniques create system level behaviours...

Actually, it depends on your definition of "system behavior." Depending
upon the nature of the system, some of these recovery techniques may be
performed without a change in the externally visible state of the system.
Granted, this is not usually the case for real-time systems, but it is
possible. Nonetheless, I concur that they should be specified at the
appropriate requirements level.

> A factor that has not been mentioned yet is traceability, and I had
> rather assumed that everyone else was in a situation where every feature
> of the software is required to be traceable to a software requirement.
> So a correct (in my terms) implementation would also include the safe
> (in your terms) features that you are both asking for.

However, at the system level, the requirement may be very generic (e.g.
"the system will attempt to recover from invalid states,") while the
specific means used to do this (kernel mode, etc.) may be left to design.
For the system to be fully analyzed (e.g. in terms of MIL-STD-882B), the
design and implementation must also be considered. There is also considerable
leeway in the implementation, given such a generic requirement.

> FMET is a new one on me (?Failure Modes Effect Testing??).  If that
> guess is right then I must admit to feeling uneasy about the ability
> of anyone to test thoroughly for all the effects of various wierd
> failures.

Failure Modes Effects Testing looks at the system effect of various
failures and combinations of failures. While FMET is generally limited
to a fixed number of combinations, it is different from (and complementary to)
requirements-oriented testing in that it does not use requirements as a
driver - only a list of system inputs, and sufficient architectural information
to determine which failure modes are possible. Each system reaction
to a failure mode is independently evaluated as to its "reasonableness,"
particularly from a safety standpoint. Although FMET is not a guarantee as
to safety (for that matter, nothing is), we've found it to be a useful tool
in safety evaluations.
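A table-driven failure-injection harness of the kind FMET implies might be
sketched as follows. This is hypothetical -- the failure-mode list and the
subprograms (Reset_System, Inject, Run_Frames, Log_Reaction) are invented;
a real FMET campaign is driven by the system's actual failure-mode analysis:

```ada
--  Sketch of a failure-injection loop: each entry names a failure
--  mode to inject; the system reaction is logged for independent
--  "reasonableness" review rather than checked against a requirement.
--  Assumes Reset_System, Inject, Run_Frames and Log_Reaction are
--  declared elsewhere.
procedure Run_FMET is
   type Failure_Mode is
     (Sensor_Stuck, Sensor_Noise, Bus_Dropout, Power_Transient);
begin
   for Mode in Failure_Mode loop
      Reset_System;
      Inject (Mode);
      Run_Frames (Count => 1_000);   --  let the effect propagate
      Log_Reaction (Mode);           --  evaluated later by a safety engineer
   end loop;
end Run_FMET;
```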

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Software Safety (was: Need help with PowerPC/Ada and realtime tasking)
  1996-05-30  0:00                 ` Robert Dewar
  1996-06-02  0:00                   ` JP Thornley
@ 1996-06-03  0:00                   ` Robert A Duff
  1996-06-05  0:00                     ` Norman H. Cohen
  1 sibling, 1 reply; 39+ messages in thread
From: Robert A Duff @ 1996-06-03  0:00 UTC (permalink / raw)



In article <dewar.833504787@schonberg>, Robert Dewar <dewar@cs.nyu.edu> wrote:
>What's the point of degrading this useful technical term this way. By
>your definition, correct just means good or some such subjective term.
>The concept of obeying a formal specification is a useful one, and it
>is one which has been given the name "correctness" in the programming
>language area.

Well, I would prefer to call this useful concept "obeying a formal
specification".  At least people ought to say "correct with respect to
formal spec X", rather than the shorthand "correct".  I must admit that
my opinion is pointless, since, as you say, the term "correct" is well
established, and nobody's going to listen to just *me*.

The reason I object to "correct" is that I've seen many cases where
people misunderstand the term.  Even people who ought to know better.
I've seen arguments along these lines: "I proved so-and-so program
correct.  Therefore, it obviously can't have any bugs, or do anything
wrong.  Therefore, there's no need to test it."  A bogus argument, but
it's easy to fool people with that sort of argument, because "correct"
really does mean "good" or "perfect" in plain English.

>I admit is occasionally confusing when standard English words are (mis)used
>in a specific technical way, but as long as everyone understands the
>usage (and correctness has been used in this specific way for many years),
>then it is useful (after all the Ada 95 RM is full of normal English words
>used in a non-standard way :-)

Sure, but it's not so bad when a more-or-less neutral term is "misused"
that way -- the term is just gaining a new meaning, and one can
(hopefully) tell which meaning is meant from context.  It's much more of
a problem when the English term being hijacked has moral connotations,
as does "correct".

By the way, I suspect that proof techniques would be *more* popular
today, if the proponents had not been overselling their case for all
these years (e.g., saying that proofs avoid the need for testing, and
using loaded terms like "correct" to describe what they're doing).

- Bob




^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Software Safety (was: Need help with PowerPC/Ada and realtime tasking)
  1996-05-31  0:00                 ` Robert A Duff
@ 1996-06-03  0:00                   ` Ken Garlington
  0 siblings, 0 replies; 39+ messages in thread
From: Ken Garlington @ 1996-06-03  0:00 UTC (permalink / raw)



Robert A Duff wrote:
> 
> The problem is that, in English, "correct" is an absolute term.  If you
> say "This flight-control software has been mathematically proven to be
> correct", there's a danger that non-programmers will believe, "They've
> proven (beyond any doubt) that the software will do what it ought to do"
> (i.e. the non-programmer misses the point that "what it ought to do" is
> not necessarily the same thing as what the formal spec said).

True - which is why, if anyone were to say to me that their software was
"proven" to be "correct", I would consider them poor engineers if they
didn't also define their terms. (Personally, I don't believe it is possible
to "prove" flight control software even matches the spec. It is only possible
to reason about the spec mathematically, and build some level of confidence
in its correctness.)

> Using the term "correct" as the proof-of-correctness folks like to,
> gives a false sense of security, IMHO.  *My* meaning for "correct" is
> harder to deal with, of course, because it requires human judgement
> (what the program *ought* to do, as opposed to what somebody formally
> *specified* it should do).  Well, that's life.

It also means that your definition of "correct" may not be desirable as
the basis for releasing flight control software to the field. Usually,
we want something more concrete than "it feels right." (Of course, it's
perfectly OK for the _developer_ to use an intuitive notion of "correct"
as one of the criteria :)

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Software Safety (was: Need help with PowerPC/Ada and realtime tasking)
  1996-06-03  0:00                   ` Robert A Duff
@ 1996-06-05  0:00                     ` Norman H. Cohen
  1996-06-07  0:00                       ` Ken Garlington
  1996-06-08  0:00                       ` Robert Dewar
  0 siblings, 2 replies; 39+ messages in thread
From: Norman H. Cohen @ 1996-06-05  0:00 UTC (permalink / raw)



In article <DsFM3B.CqB@world.std.com>, bobduff@world.std.com (Robert A
Duff) writes: 

|> The reason I object to "correct" is that I've seen many cases where
|> people misunderstand the term.  Even people who ought to know better.

I agree.  About a decade ago, at a workshop on formal verification of Ada
programs, I got into a heated argument with David Gries about a useful
definition of correctness.  I argued that it was useless to prove a
program conformant to formal specifications unless one had a high degree
of confidence that the specifications expressed the customer's intent.
Gries seemed to have in mind a model whereby the customer presents the
software developer with a contract expressed in the form of formal
specifications; if the software developer conforms to the specifications,
he has fulfilled the contract, even if the customer doesn't get what he
really wanted.

|> I've seen arguments along these lines: "I proved so-and-so program
|> correct.  Therefore, it obviously can't have any bugs, or do anything
|> wrong.  Therefore, there's no need to test it."  A bogus argument, but
|> it's easy to fool people with that sort of argument, because "correct"
|> really does mean "good" or "perfect" in plain English.

And yet there have been studies analyzing the causes of software "errors"
and attributing roughly half of them to mistakes in the specifications!

--
Norman H. Cohen    ncohen@watson.ibm.com




^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Software Safety (was: Need help with PowerPC/Ada and realtime tasking)
  1996-06-05  0:00                     ` Norman H. Cohen
@ 1996-06-07  0:00                       ` Ken Garlington
  1996-06-12  0:00                         ` Norman H. Cohen
  1996-06-08  0:00                       ` Robert Dewar
  1 sibling, 1 reply; 39+ messages in thread
From: Ken Garlington @ 1996-06-07  0:00 UTC (permalink / raw)



Norman H. Cohen wrote:

> Gries seemed to have in mind a model whereby the customer presents the
> software developer with a contract expressed in the form of formal
> specifications; if the software developer conforms to the specifications,
> he has fulfilled the contract, even if the customer doesn't get what he
> really wanted.

This sounds reasonable. Consider the example of someone buying a compiler,
and saying, "I want a compiler that passes the ACVC!" So, the vendor provides
such a compiler, and thus expects payment for completing the contract. Then,
the user says, "Wait a minute! The generated code is too slow! I'm not paying!"
I wouldn't consider that complaint sufficient to withhold payment, if execution 
performance wasn't part of the agreement...

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Software Safety (was: Need help with PowerPC/Ada and realtime tasking)
  1996-06-05  0:00                     ` Norman H. Cohen
  1996-06-07  0:00                       ` Ken Garlington
@ 1996-06-08  0:00                       ` Robert Dewar
  1996-06-08  0:00                         ` Robert A Duff
  1 sibling, 1 reply; 39+ messages in thread
From: Robert Dewar @ 1996-06-08  0:00 UTC (permalink / raw)



"I agree.  About a decade ago, at a workshop on formal verification of Ada
programs, I got into a heated argument with David Gries about a useful
definition of correctness.  I argued that it was useless to prove a
program conformant to formal specifications unless one had a high degree
of confidence that the specifications expressed the customer's intent.
Gries seemed to have in mind a model whereby the customer presents the
software developer with a contract expressed in the form of formal
specifications; if the software developer conforms to the specifications,
he has fulfilled the contract, even if the customer doesn't get what he
really wanted.
"

I certainly agree that it is unfortunate that the term correct has been
hijacked; however, it is tilting at windmills to try to change this now
(sort of like my efforts to prevent the misuse of moot and oxymoron :-)

Your "heated argument" reminds me of a conference I was at where Niklaus
Wirth was talking about programming being a process of refinement whre
the one and only issue was that refinement should be correctness preserving.

At the end of the panel session, I commented that the whole notion of
correctness is misguided, because the issue is building reliable software,
not correct software. I used car manufacturing as an analogy: car companies
are interested in building reliable cars; almost no one would talk about
a correct car. And indeed almost any car does NOT meet its specifications,
but the issue is whether the defects are minor. A small area of miscolored
paint under the rear bumper renders a car incorrect, but not unreliable!

In the case of software, correct and reliable are of course NOT the same.
Reliability includes the specification being correct, and of course
we have no way of proving a specification correct; at most we could
prove it consistent, but that might simply mean it is consistently wrong!
Actually, as specifications get more formal, it often gets harder and harder
to determine whether they are correct. Try looking at the formal definition done
by the EEC for Ada 83: it is two fat telephone books of mathematical
formulas -- and there is no way of ensuring it is correct [in fact, as
would be expected for what is essentially a huge program that has never
been run, it is not correct].
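[Editor's illustration, not from the original post.] The "consistent but
consistently wrong" point can be made concrete with a standard textbook
example: a "sort" specification that requires only a nondecreasing result is
internally consistent, and provably satisfied by a useless program, because
the spec never says the output must be a permutation of the input.

```python
def sort_per_weak_spec(xs):
    # Provably satisfies the weak spec below, yet discards the input:
    # "correct" with respect to the written spec, worthless to the customer.
    return [0] * len(xs)

def weak_spec_holds(xs, result):
    # The (too-weak) formal spec: the result is nondecreasing.
    return all(a <= b for a, b in zip(result, result[1:]))

def intended_spec_holds(xs, result):
    # What the customer actually meant: a sorted permutation of the input.
    return weak_spec_holds(xs, result) and sorted(xs) == result

out = sort_per_weak_spec([3, 1, 2])
print(weak_spec_holds([3, 1, 2], out))      # True: conforms to the written spec
print(intended_spec_holds([3, 1, 2], out))  # False: not what was wanted
```

Proving the weak spec consistent (or even proving the program conforms to it)
says nothing about whether the spec captured the intent.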

On the other hand, a program that trivially departs from its spec (the background
of the GUI is a slightly different shade of green than specified, for example)
may still be completely reliable even though it is not correct.

So I prefer to leave the term correctness to (mis)mean what David Gries and
Niklaus Wirth (and many others) insist on, and simply to recognize that it is
not the most important property. If I am a customer, I want a program
that works; a correct (in this sense) program that does not work is of
no interest to me, and I will not sign a contract that says that all the
program has to do is correctly implement some formal specification!

P.S. At that panel session, when I made the comment about cars not having
to be correct, the audience, quite frustrated I think with the panel
presentation, applauded the comment loudly :-)






* Re: Software Safety (was: Need help with PowerPC/Ada and realtime tasking)
  1996-06-08  0:00                       ` Robert Dewar
@ 1996-06-08  0:00                         ` Robert A Duff
  0 siblings, 0 replies; 39+ messages in thread
From: Robert A Duff @ 1996-06-08  0:00 UTC (permalink / raw)



In article <dewar.834237518@schonberg>, Robert Dewar <dewar@cs.nyu.edu> wrote:
>I certainly agree that it is unfortunate that the term correct has been
>hijacked; however, it is tilting at windmills to try to change this now
>(sort of like my efforts to prevent the misuse of moot and oxymoron :-)

You tilt at "moot" and "oxymoron".  I tilt at "correct".  We all have
our pet peeves, I suppose.  If we think we can actually *change* any
such usage, we're either prophets or deranged (usually the latter).

;-)

Natural language changes, not always for the better.

>In the case of software, correct and reliable are of course NOT the same.
>Reliability includes the specification being correct, and of course
>we have no way of proving a specification correct; at most we could
>prove it consistent, but that might simply mean it is consistently wrong!

Indeed.

>Actually, as specifications get more formal, it often gets harder and harder
>to determine whether they are correct. Try looking at the formal definition done
>by the EEC for Ada 83: it is two fat telephone books of mathematical
>formulas -- and there is no way of ensuring it is correct [in fact, as
>would be expected for what is essentially a huge program that has never
>been run, it is not correct].

There are two things at work here: (1) as one gets more formal, one gets
a deeper understanding, and eliminates bugs, and (2) as one gets more
formal, one gets further removed from the intuitive notion of what the
software really ought to do, and this causes bugs.

>On the other hand, a program that trivially departs from its spec (the background
>of the GUI is a slightly different shade of green than specified, for example)
>may still be completely reliable even though it is not correct.

True, but you have to admit that this is less important than the other
way 'round, where software obeys its spec but does some damage.  I will
tolerate it when somebody *insists* on the correct shade of green, if they
also insist on things that really *do* matter.

And one shouldn't forget that about half of the bugs really are the kind
of "plain old bugs" where the software doesn't do what the spec says it
should, and the spec is, in fact, right, and everybody agrees it's a
bug.

- Bob





* Re: Software Safety (was: Need help with PowerPC/Ada and realtime tasking)
  1996-06-12  0:00                         ` Norman H. Cohen
@ 1996-06-12  0:00                           ` Ken Garlington
  0 siblings, 0 replies; 39+ messages in thread
From: Ken Garlington @ 1996-06-12  0:00 UTC (permalink / raw)



Norman H. Cohen wrote:
>  
> The question is whether the compiler vendor is more interested in
> satisfying the customer or beating him in court.
> 
> Which do you think is more conducive to repeat business?

Given the example of Microsoft, IBM, etc., the _latter_ (when coupled
with effective marketing).

-- 
LMTAS - "Our Brand Means Quality"





* Re: Software Safety (was: Need help with PowerPC/Ada and realtime tasking)
  1996-06-07  0:00                       ` Ken Garlington
@ 1996-06-12  0:00                         ` Norman H. Cohen
  1996-06-12  0:00                           ` Ken Garlington
  0 siblings, 1 reply; 39+ messages in thread
From: Norman H. Cohen @ 1996-06-12  0:00 UTC (permalink / raw)



In article <31B7E171.7144@lmtas.lmco.com>, Ken Garlington
<garlingtonke@lmtas.lmco.com> writes: 

|> This sounds reasonable. Consider the example of someone buying a compiler,
|> and saying, "I want a compiler that passes the ACVC!" So, the vendor provides
|> such a compiler, and thus expects payment for completing the contract. Then,
|> the user says, "Wait a minute! The generated code is too slow! I'm not paying!"
|> I wouldn't consider that complaint sufficient to withhold payment, if execution
|> performance wasn't part of the agreement...

The question is whether the compiler vendor is more interested in
satisfying the customer or beating him in court.

Which do you think is more conducive to repeat business?

--
Norman H. Cohen    ncohen@watson.ibm.com





end of thread, other threads:[~1996-06-12  0:00 UTC | newest]

Thread overview: 39+ messages
-- links below jump to the message on this page --
1996-05-17  0:00 Need help with PowerPC/Ada and realtime tasking Dave Struble
1996-05-18  0:00 ` JP Thornley
1996-05-20  0:00   ` Robert I. Eachus
1996-05-21  0:00     ` Michael Levasseur
1996-05-21  0:00   ` Richard Riehle
1996-05-25  0:00     ` JP Thornley
1996-05-27  0:00       ` Darren C Davenport
1996-05-30  0:00         ` Ralph E. Crafts
1996-05-31  0:00           ` JP Thornley
1996-06-03  0:00             ` Ken Garlington
1996-05-28  0:00       ` Tasking in safety-critical software (!) (was Re: Need help with PowerPC/Ada and realtime tasking) Kevin F. Quinn
1996-05-25  0:00     ` Need help with PowerPC/Ada and realtime tasking JP Thornley
1996-05-27  0:00       ` Robert Dewar
1996-05-28  0:00         ` JP Thornley
1996-05-29  0:00           ` Ken Garlington
1996-05-29  0:00             ` Robert A Duff
1996-05-30  0:00               ` Software Safety (was: Need help with PowerPC/Ada and realtime tasking) Ken Garlington
1996-05-30  0:00                 ` Robert Dewar
1996-06-02  0:00                   ` JP Thornley
1996-06-03  0:00                   ` Robert A Duff
1996-06-05  0:00                     ` Norman H. Cohen
1996-06-07  0:00                       ` Ken Garlington
1996-06-12  0:00                         ` Norman H. Cohen
1996-06-12  0:00                           ` Ken Garlington
1996-06-08  0:00                       ` Robert Dewar
1996-06-08  0:00                         ` Robert A Duff
1996-05-31  0:00                 ` Robert A Duff
1996-06-03  0:00                   ` Ken Garlington
1996-05-30  0:00               ` Need help with PowerPC/Ada and realtime tasking JP Thornley
1996-05-31  0:00                 ` Ken Garlington
1996-06-02  0:00                   ` JP Thornley
1996-06-03  0:00                     ` Ken Garlington
1996-05-28  0:00   ` Robert I. Eachus
1996-05-30  0:00     ` JP Thornley
1996-05-28  0:00   ` Robert I. Eachus
1996-05-30  0:00     ` JP Thornley
1996-06-03  0:00       ` Ken Garlington
1996-05-31  0:00   ` Robert I. Eachus
1996-06-03  0:00   ` Ralph Paul
