From mboxrd@z Thu Jan  1 00:00:00 1970
From: donh@syd.csa.com.au (Don Harrison)
Subject: Re: Safety-critical development in Ada and Eiffel
Date: 1997/08/14
Sender: news@syd.csa.com.au
References: <33F26189.B3E@flash.net>
Organization: CSC Australia, Sydney
Reply-To: donh@syd.csa.com.au
Newsgroups: comp.object,comp.software-eng,comp.lang.ada,comp.lang.eiffel

Ken Garlington wrote:

:Don Harrison wrote:
:>
:> Ken asked for evidence that SCOOP supports thread-level timing.
:>
:> Ken Garlington wrote:
:>
:> Why are they collectively locked?
:> Answer: So that processing can be performed under known conditions without
:> interference from other threads (helping to avoid race conditions). Also,
:> so that we know up front that we have all the resources we need (helping
:> to avoid deadlock).
:
:What happens if the resource is unavailable (already locked by a
:lower-priority thread, for example)?

The caller blocks until the resource is available.

There are actually no lower-priority threads - all threads have the same
priority and their requests are queued impartially. Where timing is not an
issue, this gives you what you want because you get a kind of statistical
"prioritisation": threads are automatically serviced by a supplier object in
proportion to the number of calls they make to it.
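The statistical "prioritisation" described above can be illustrated with a
small sketch - this is a plain Python simulation, not SCOOP itself; with one
impartial FIFO request queue per supplier object, each thread is serviced in
proportion to how many requests it enqueues:

```python
# Illustrative sketch (not real SCOOP): an impartial first-come,
# first-served request queue on a supplier object.
from collections import Counter, deque

def service(requests):
    """Drain a FIFO of (thread, call) requests; count services per thread."""
    queue = deque(requests)
    served = Counter()
    while queue:
        thread, _call = queue.popleft()  # strictly first-come, first-served
        served[thread] += 1
    return served

# Thread A makes twice as many calls as thread B, so it automatically
# receives twice the service - no explicit priorities involved.
demo = [("A", "f"), ("B", "f"), ("A", "f")] * 10
```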
In this situation, you don't care which threads are serviced first. You only
care whether there is enough processing capacity; if there isn't, you have a
problem regardless of what queueing scheme is used.

Where timing *is* an issue, you use express messages. This mechanism allows
threads to dynamically change their "priority" relative to other threads. It
should only be used where it's critical to transfer control. There are two
parties involved - the holder and the challenger - and they "duel" for
control of a resource. A duel ends either with a transfer of control or with
the challenger waiting till the holder finishes with the resource.

Threads change their relative "priority" through calls to library routines:

Holder:
  - retain (the default) makes the holder resist a challenge.
  - yield makes the holder willing to yield.

Challenger:
  - wait_turn (the default) makes the challenger willing to wait.
  - demand makes the challenger mount a challenge and commit suicide if
    resisted.
  - insist makes the challenger mount a challenge but wait if resisted.

The outcome of duels is summarised in the following table:

                                    Challenger
           -------------------------------------------------------------------
           | [wait_turn]         demand                   insist
-----------|-------------------------------------------------------------------
Holder     |
  [retain] | [Challenger waits]  Exception in challenger  Challenger waits
  yield    | Challenger waits    Exception in holder      Exception in holder

Notes:
 1) Defaults are enclosed in square brackets. Note the default behaviour of
    the challenger waiting.
 2) This simple dual-priority scheme can be embellished to offer multiple
    relative "priorities".

Examples of use:
 - A thread which absolutely must execute (possibly driven by a timer) can
   issue a challenge for a resource.
 - A thread performing a long IO operation can offer to yield to a more
   critical thread.
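The duel outcomes can be encoded as a small lookup - sketched here in Python
purely for illustration (the mode names mirror the library routines; nothing
else here is real SCOOP API):

```python
# Hypothetical sketch: the duel outcome table for express messages,
# encoded as a plain function.

def duel(holder: str, challenger: str) -> str:
    """Return the outcome of a duel between a holder mode and a
    challenger mode, per the summary table."""
    outcomes = {
        # (holder mode, challenger mode): outcome
        ("retain", "wait_turn"): "challenger waits",
        ("retain", "demand"):    "exception in challenger",
        ("retain", "insist"):    "challenger waits",
        ("yield",  "wait_turn"): "challenger waits",
        ("yield",  "demand"):    "exception in holder",
        ("yield",  "insist"):    "exception in holder",
    }
    return outcomes[(holder, challenger)]
```

Note that only one combination (retain vs. demand) kills the challenger, and
the defaults on both sides give the conservative "challenger waits" outcome.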
:It seems to me that locking _all_ objects involved in an operation (without
:regard to whether they guard a sensitive resource, such as a data store
:that can be written by multiple threads) means that the latencies associated
:with the threads go up dramatically, particularly if the locking is
:transitive.

I was also concerned about this issue initially but came to the conclusion
that objects would be locked for just as long as they (safely) need to be.
Where it's important to release a frequently used shared resource quickly,
various design strategies can be applied to minimise locking.

:Priority inversion would also seem to be much more likely.

Can you explain what you mean by this?

:Ada has a different approach. Consider a fairly simple case of a value that
:is read by some tasks, written by others. Clearly, if a thread is writing
:a value, it should start and complete that operation before any other
:access to that resource. Similarly, any attempt to read the resource should
:start and complete before any write to the resource begins.

Agree.

:However,
:a read by a low-priority task can be suspended to permit a higher-priority
:task to read the same resource (assuming no side-effects of the read).
:As I understand it, you would prohibit this benign "interference." Thus,
:the high-priority task (which may run much more frequently than the
:low-priority task) is unnecessarily delayed, potentially causing it
:to miss its deadline.

No. In this case, the reads could occur concurrently (due to optimisations).

:You claimed that the compiler could optimize the timing properties of
:the system; I would be interested in seeing such a compiler.

I don't think it would be too hard. The compiler just has to verify that
queries (functions) are truly benign and implement two types of locks - read
and write. This is yet another reason to strictly enforce side-effect-free
functions.

:It wouldn't catch the entry/exit time of do_something, but
:that's not a big deal.
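The two lock types a compiler could emit for benign queries can be sketched
as a classic readers-writer lock - this is an illustrative Python sketch, not
anything a real Eiffel compiler generates; concurrent reads are permitted
while writes remain exclusive:

```python
# Illustrative sketch: a readers-writer lock of the kind a compiler could
# emit once it has verified that queries are side-effect-free. Many readers
# may proceed concurrently; a writer gets exclusive access.
import threading

class ReadWriteLock:
    def __init__(self):
        self._readers = 0
        self._counter_lock = threading.Lock()  # guards _readers
        self._write_lock = threading.Lock()    # held by writer or first reader

    def acquire_read(self):
        with self._counter_lock:
            self._readers += 1
            if self._readers == 1:
                self._write_lock.acquire()  # block writers while reads run

    def release_read(self):
        with self._counter_lock:
            self._readers -= 1
            if self._readers == 0:
                self._write_lock.release()  # last reader lets writers in

    def acquire_write(self):
        self._write_lock.acquire()

    def release_write(self):
        self._write_lock.release()
```

With this in place, the low-priority and high-priority reads in the quoted
scenario simply overlap, so the "benign interference" question never arises.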
At a higher-level call, perhaps.

:I see you've missed the original
:part of the thread. The poster suggested writing
:assertions (post-conditions) on P, Q, R, and S as the
:way to do this, which (as I said) won't work effectively.
:Your approach is at the thread level, not the object
:level, which is the way I would have done it as well.
:So, in fact, we agree that the original post is in
:error (I assume).

Probably - can't remember what was said.

:> will do the required actions and measure the elapsed time.
:>
:> Note that all four objects are locked for the block between "do" and "end".
:> This means that there is no intervening time between calls to them and
:> there is no opportunity for any other threads to jump in and make a mess
:> of things.
:
:Just out of curiosity, can the "local" section make reference to these
:objects?

Can't recall (and couldn't see from a quick glance at OOSC-2) but do know
you would be limited in what you could do. For example, a local separate
object couldn't be the target of a call. Why do you ask?

:Also, how is "do_something" attached to a hardware event?

Not sure, but probably via a call to a library routine.

:> Note that this offers more protection than Ada protected objects, which
:> are only locked on a single-object basis. As you will see later, this also
:> removes the need for a "requeue" statement.
:
:True. However, you pay a pretty terrible price for this protection, as
:far as I can tell. I can see several systems which literally could not
:be implemented under these rules.

They would likely be designed differently, though, by paying more attention
to locking issues.

:> :If you are saying that a thread (a series of object invocations) cannot
:> :be interrupted at _any point_, then that pretty much eliminates
:> :concurrency, doesn't it?
:>
:> No, it just means it's more controlled (and safer). There is a
:> complementary library mechanism called "express messages" which allows
:> pre-emption of threads.
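The quoted "all four objects are locked between do and end" behaviour can be
simulated as follows - a hypothetical Python sketch, not Eiffel; the names
(SharedObject, do_something) are invented, and each shared object carries an
explicit lock that SCOOP would manage implicitly for separate arguments:

```python
# Hypothetical simulation: SCOOP locks every separate argument of a routine
# for the whole routine body. Here do_something acquires all argument locks
# before its "do" block and releases them together at "end".
import threading
from contextlib import ExitStack

class SharedObject:
    def __init__(self, name):
        self.name = name
        self.lock = threading.Lock()

def do_something(*objects):
    # Acquire every argument's lock up front, in a fixed (sorted) order to
    # avoid deadlock; all locks are held until the routine finishes.
    with ExitStack() as stack:
        for obj in sorted(objects, key=lambda o: o.name):
            stack.enter_context(obj.lock)
        # --- "do" block: no other thread can touch these objects here ---
        return [obj.name for obj in objects]
        # --- "end": all locks are released together ---
```

Acquiring everything up front is what removes the need for Ada's "requeue":
the routine never starts with only part of its resources in hand.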
:
:Perhaps this "express messages" is the backdoor which I have to believe
:a real system would need.

Correct.

:Correct me if I'm wrong, but no other thread can run while do_something is
:executing?

On a single processor, yes. On multiple processors, no.

:Or are you saying it can be interrupted, just that no other thread
:can access the objects referenced in the parameter list (or any objects
:referenced in _their_ parameter lists)?

Correct.

Have run out of time. Will respond to the rest tomorrow.

Don.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Don Harrison
donh@syd.csa.com.au