From: eachus@spectre.mitre.org (Robert I. Eachus)
Subject: Re: Need help with PowerPC/Ada and realtime tasking
Date: 1996/05/28
References: <1026696wnr@diphi.demon.co.uk>
Organization: The Mitre Corp., Bedford, MA.
Newsgroups: comp.lang.ada

In article <122916091wnr@diphi.demon.co.uk> JP Thornley writes:

> Several studies into safety-critical subsets have all rejected tasking:-
>    Safe Ada
>    SPARK Ada
>    High Integrity Ada Study (YSE/BAe)
>    CSMART Ada
> so there is going to be a major credibility problem convincing a
> qualification authority to go along with tasking.

I won't go too deep into the gory details, but there is Ada tasking
and Ada tasking.  There are some features which anyone writing or
approving safety-critical applications should reject, and there are
some features that most real-time systems need.  The problem is where
to draw the line.

A simple example of something that should be verboten is dynamic
creation of tasks.  If you can't tell before the program runs how many
tasks are needed and how much memory they will use, there is no way to
certify the software.  If you can tell, the most sensible thing to do
is to create those tasks at the start, and leave out heap and memory
management.  Now look at RM95 D.7.  It spells out arguments to pragma
Restrictions which can be used to enforce such an environment:
Max_Tasks, No_Task_Hierarchy, No_Task_Allocators, and
No_Implicit_Heap_Allocations.  (A sketch of a program in that style
appears below.)

> Tasks in the application require tasking run-time support.  This
> will therefore need to be qualified to safety-critical standards
> (ie reasonable expectation of zero failures, see other post in
> this thread).  Since I've never done it, I don't know what it
> would take in terms of effort, but it can't be anything other than
> a very major undertaking.

A restricted run-time (with the above restrictions plus, among others,
No_Abort_Statements and No_Terminate_Alternatives) should be small and
easy to certify.  The parts of tasking that make the run-time complex
have no place in real-time or safety-critical code.  On the other
hand, they are necessary when creating robust, large, complex
information systems, and sometimes you need a mix of both.  You know
that awful feeling on a PC or wherever, when the screen stops
responding to your inputs due to an error in the input handler?  A
"dead man" timer can abort that task and replace it.  Trivial with a
good Ada implementation, and the user may never notice.  But three
seconds--even three milliseconds--of ignored input has no place in any
high-priority task in any real-time system I have ever worked with;
such things--usually with much shorter fuses--can be useful for
lower-priority tasks.
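To make the dead-man timer concrete, here is a rough sketch of the
idea in Ada 95.  All of the names (Dead_Man_Demo, Input_Handler_Type,
Watchdog, Check_In) are mine, not from any particular system.  Note
that it uses abort and an allocator, so it belongs on the
information-system side of the mix, never in the restricted profile:

   procedure Dead_Man_Demo is

      task type Input_Handler_Type;             -- the worker that may hang
      type Handler_Ptr is access Input_Handler_Type;

      task Watchdog is
         entry Check_In;                        -- handler proves it is alive
      end Watchdog;

      Handler : Handler_Ptr;

      task body Input_Handler_Type is
      begin
         loop
            -- ... read and process one input here ...
            Watchdog.Check_In;                  -- check in after each one
         end loop;
      end Input_Handler_Type;

      task body Watchdog is
         Fuse : constant Duration := 3.0;       -- the three-second fuse
      begin
         loop
            select
               accept Check_In;                 -- still alive; rearm the fuse
            or
               delay Fuse;                      -- no sign of life in time...
               abort Handler.all;               -- ...so kill the hung task
               Handler := new Input_Handler_Type;  -- and quietly replace it
            end select;
         end loop;
      end Watchdog;

   begin
      Handler := new Input_Handler_Type;        -- start the first handler
   end Dead_Man_Demo;

It is a sketch, not a demo harness: neither task ever terminates, and
a real version would guard against the watchdog firing before the
first handler is created, and would log the restart somewhere.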
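And going back to RM95 D.7 for a moment, the restricted style I
described above looks something like the sketch below.  The
restriction identifiers are straight out of D.7; the package name,
task name, and numbers are invented, and which restrictions actually
buy you a smaller run-time is of course vendor-specific.  The delay
until loop is the canonical shape of a rate monotonic periodic task,
which is relevant to the scheduling discussion further down:

   -- Configuration pragmas; these go at the head of the compilation.
   pragma Restrictions (Max_Tasks => 4);        -- task count fixed before run time
   pragma Restrictions (No_Task_Hierarchy,
                        No_Task_Allocators,     -- no dynamic task creation
                        No_Implicit_Heap_Allocations,
                        No_Abort_Statements,    -- these two let the run-time
                        No_Terminate_Alternatives);  -- drop its complex machinery

   package Periodic_Tasks is
      pragma Elaborate_Body;
   end Periodic_Tasks;

   with Ada.Real_Time; use Ada.Real_Time;
   package body Periodic_Tasks is

      task Sensor_Loop is                       -- created once, at start-up
         pragma Priority (10);  -- rate monotonic: shortest period, highest priority
      end Sensor_Loop;

      task body Sensor_Loop is
         Period : constant Time_Span := Milliseconds (25);  -- say, a 40 Hz frame
         Next   : Time := Clock;
      begin
         loop
            -- ... sample sensors and update state: no heap, no rendezvous ...
            Next := Next + Period;
            delay until Next;                   -- unlike "delay", no cumulative drift
         end loop;
      end Sensor_Loop;

   end Periodic_Tasks;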
> I would guess that most of the tasking part of the run-time will
> be written in Ada - so will be required to either conform to the
> safety-critical subset in use or be re-written to that sub-set.
> Common restrictions include no access types and no heap usage - is
> this likely to be a problem?  [One problem of working with small
> subsets is that you end up knowing nothing about the rest of the
> language; it's about six years since I last saw an Ada task
> anywhere other than a text-book or journal.]

See the above: not only is it no problem, it is expected in many
environments.

> There is a related issue that doesn't come up very often when discussing
> scheduling strategies, which is the accuracy of the worst-case execution
> times used in the analysis.  Deriving these figures requires a major
> effort, with substantial error bounds on the resulting timings.  I
> believe that the figures that I currently use are typically in the range
> 10%-30% pessimistic and I wouldn't be happy to use figures with a lower
> margin of error unless I can believe that their accuracy is improved.

Uh, a couple of points here.  First, there are fudge factors and fudge
factors.  I've worked with code where the key timings were known to
the accuracy of the clock, and ate up over 95% of the time available.
Better have very predictable processors for that.  (And be near
end-of-life for the system.  If there is one thing we know, it is that
upgrades very seldom run faster.)  On some of the same applications,
the low-priority task timings were known--and knowable--only with much
less accuracy.  You either live with generous margins, or design the
system so that overruns in the low-priority tasks cannot affect the
higher-priority tasks.  That is hard to do in cyclic executives (but I
have seen it done), and much easier with rate monotonic scheduling.

Second, even cyclic executives need some slop for schedulability,
often one third or more of the cycles.  (You schedule a cyclic
executive based on worst-case timings in all tasks, even if two events
are mutually exclusive--or you HOPE they are--such as landing and
weapons release.)  With RMA you can often do much better, but the more
important thing is to be able to prove you haven't done worse.

> I can't get excited about more elaborate scheduling strategies to
> squeeze another 5% out of the processor with safety margins like
> this (I'd sooner put it into more accurate timing figures).

Exactly.  But I have seen cyclic executives with 5% duty cycles where
pushing them any harder means failing scheduling.  I'd much rather
approve a system with a 40% load and RMA "guaranteeing" schedulability
at twice that, than a cyclic with a 30% load and crossed fingers.

> I was beginning to wonder whether I was the only reader of cla still
> using Ada 83 until there were some recent posts from others in the same
> situation in another thread.  To put my situation more clearly, there is
> one safety-critical system going into system design later this year,
> first trials to be run in 1998 and delivery to the customer in 2000
> onwards - this system will use Ada 83 as the Ada 95 compilers won't be
> usable in that timescale.

For such a system, switching to Ada 95 may still make sense.  If you
are not yet into coding, then even with no design changes you should
see some reduction in coding costs.  But you could be taking a risk if
you require ACVC 2.1 validation.  It is a small risk, since there
should be several 2.1-validated compilers next year, but I guess it
depends on when you need the validated version by.  If this is not a
DoD system, then other validations may matter more.

--
Robert I. Eachus

with Standard_Disclaimer;
use  Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...