From: Ken Garlington
Subject: Re: Safety-critical development in Ada and Eiffel
Date: 1997/08/15
Message-ID: <33F521D0.26@flash.net>
References: <33F26189.B3E@flash.net>
Organization: Flashnet Communications, http://www.flash.net
Reply-To: Ken.Garlington@computer.org
Newsgroups: comp.object,comp.software-eng,comp.lang.ada,comp.lang.eiffel

Don Harrison wrote:
> 
> The caller blocks until the resource is available. There are actually no
> lower-priority threads - all threads are the same priority and their requests
> are queued impartially. Where timing is not an issue this gives you what you
> want because you get a kind of statistical "prioritisation". What I mean is
> that threads will automatically get serviced by a supplier object in proportion
> to the number of calls they make to it. In this situation, you don't care
> which threads are serviced first. You only care about whether there is enough
> processing capacity and if there isn't enough, you have a problem regardless
> of what queueing scheme is used.

That's true iff the threads are all the same priority. I've yet to work on a
real system where this is the case, so this doesn't really help much...

> 
> Where timing *is* an issue, you use express messages. This mechanism allows
> threads to dynamically change their "priority" relative to other threads.
> This mechanism should only be used where it's critical to transfer control.
> There are two parties involved - the holder and the challenger - and they "duel"
> for control of a resource. Duels either result in a transfer of control or
> the challenger waiting till the holder finishes with the resource.

Does this mean that each thread has to know its relative priority to all
other threads (more to the point, thread interfaces)? That would be a serious
maintainability issue, I would think. You might want to look at some of the
real-time project examples in the ADARTS courseware, for example.

> - A thread which absolutely must execute (possibly driven by a timer) can
> issue a challenge for a resource.

If, while another thread is operating, it cannot be interrupted (as your
previous note said, a thread cannot be interrupted even between object
calls), then how does the other thread begin execution to issue the
challenge?

> I was also concerned about this issue initially but came to the conclusion
> that objects would be locked for just as long as they (safely) need to be.
> Where it's important to release a frequently used shared resource quickly,
> various design strategies can be applied to minimise locking.

Don't these "design strategies" cause the same uncertainty that you said you
didn't want? For example, if a lower-priority thread agrees to give up
control of an object to a higher-priority thread, then race conditions are
possible if the threads are not designed properly. This seems
counter-intuitive.
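(As an aside, for the queueing discussion above: in Ada 95 the choice between
this kind of "impartial" FIFO queueing and priority-ordered entry queues is a
single configuration pragma, rather than something each thread has to
negotiate. A rough sketch only -- the Seize/Release resource below is purely
illustrative, and is not the Read/Write object discussed earlier in the
thread:

   pragma Queuing_Policy (Priority_Queuing);
   -- Configuration pragma: callers blocked on an entry are queued by
   -- priority instead of the default FIFO_Queuing ("impartial") order.

   protected Resource is
      entry Seize;        -- caller blocks here until the resource is free
      procedure Release;
   private
      Busy : Boolean := False;
   end Resource;

   protected body Resource is
      entry Seize when not Busy is
      begin
         Busy := True;
      end Seize;

      procedure Release is
      begin
         Busy := False;
      end Release;
   end Resource;

The point is only that the queueing policy is a property of the partition,
not of each individual thread's design.)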
Consider the simple protected object discussed earlier with the Read and
Write operations. It would seem that the object itself, with its internal
knowledge of how Read and Write work, would be the better place to control
when blocking needs to occur. In order for the threads that use this object
to decide the outcome of their "duel", don't they have to know the internal
operation of the object -- an object contract violation?

> 
> :Priority inversion would also seem to be much more likely.
> 
> Can you explain what you mean by this?

Priority inversion? When a low-priority task seizes a high-priority
resource, it effectively operates at the priority of the seized resource. If
a high-priority task then attempts to seize the resource, it is effectively
blocked by the lower-priority task -- priority inversion. However, since you
can in fact have a thread interrupt another thread (has to happen, otherwise
there is no "dueling"), and since the higher-priority thread can seize the
lower one, I assume this can be avoided. However, it also opens the door to
both mutual exclusion violations and deadlock.

> :Ada has a different approach. Consider a fairly simple case of a value that
> :is read by some tasks, written by others. Clearly, if a thread is writing
> :a value, it should start and complete that operation before any other
> :access to that resource. Similarly, any attempt to read the resource should
> :start and complete before any write to the resource begins.
> 
> Agree.
> 
> :However,
> :a read by a low-priority task can be suspended to permit a higher-priority
> :task to read the same resource (assuming no side-effects of the read).
> :As I understand it, you would prohibit this benign "interference." Thus,
> :the high-priority task (which may run much more frequently than the
> :low-priority task) is unnecessarily delayed, potentially causing it
> :to miss its deadline.
> 
> No. In this case, the reads could occur concurrently (due to optimisations).

Describe the general-purpose algorithm used to determine this by a compiler.
If you can, you'll put a lot of system architects out of business! Has such
an algorithm been implemented yet? I would like to test this wonderful tool!

> :You claimed that the compiler could optimize the timing properties of
> :the system; I would be interested in seeing such a compiler.
> 
> I don't think it would be too hard. The compiler just has to verify that
> queries (functions) are truly benign

Define "benign"! For example, is a read of a memory location "benign"? Maybe
not, if it's memory-mapped I/O. Some devices, for example, don't take kindly
to starting a read and being interrupted in the middle to do a new read. You
may not think it's too hard, but I suspect you haven't encountered many of
these real-life systems.

> and implement two types of locks
> - read and write. This is yet another reason to strictly enforce side-effect-
> free functions.
> 
> :It wouldn't catch the entry/exit time of do_something, but
> :that's not a big deal.
> 
> At a higher level call, perhaps..
> 
> :I see you've missed the original
> :part of the thread. The poster suggested writing
> :assertions (post-conditions) on P, Q, R, and S as the
> :way to do this, which (as I said) won't work effectively.
> :Your approach is at the thread level, not the object
> :level, which is the way I would have done it as well.
> :So, in fact, we agree that the original post is in
> :error (I assume).
> 
> Probably - can't remember what was said.
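Coming back to the Read and Write object and the priority inversion question
above: in Ada the object itself carries the locking, and with ceiling locking
it also carries the inversion bound, so callers don't need to know anything
about its internals. A sketch only -- the Integer value and the ceiling of 10
are purely illustrative:

   pragma Locking_Policy (Ceiling_Locking);
   -- Configuration pragma: a task inside a protected action runs at the
   -- object's ceiling priority, which bounds priority inversion to the
   -- length of a single protected action.

   protected Shared_Value is
      pragma Priority (10);              -- ceiling; callers run at or below it
      function  Read return Integer;     -- functions may run concurrently
                                         -- with each other (a "read lock")
      procedure Write (V : in Integer);  -- procedures are exclusive
                                         -- (a "write lock")
   private
      Value : Integer := 0;
   end Shared_Value;

   protected body Shared_Value is
      function Read return Integer is
      begin
         return Value;
      end Read;

      procedure Write (V : in Integer) is
      begin
         Value := V;
      end Write;
   end Shared_Value;

Whether two Reads actually proceed in parallel is up to the implementation,
but nothing in the callers has to change either way.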
> 
> :> will do the required actions and measure the elapsed time.
> :> 
> :> Note that all four objects are locked for the block between "do" and "end".
> :> This means that there is no intervening time between calls to them and there
> :> is no opportunity for any other threads to jump in and make a mess of things.
> :
> :Just out of curiosity, can the "local" section make reference to these
> :objects?
> 
> Can't recall (and couldn't see from a quick glance at OOSC-2) but do know
> you would be limited in what you could do. For example, a local separate
> object couldn't be the target of a call. Why do you ask?

Because you said that they were locked between the "do" and "end". If they
can be referenced in the local section, but are not locked, this would seem
to be a Bad Thing.

> :Also, how is "do_something" attached to a hardware event?
> 
> Not sure, but probably via a call to a library routine.
> 
> :> Note that this offers more protection than Ada protected objects which are
> :> only locked on a single-object basis. As you will see later, this also removes
> :> the need for a "requeue" statement.
> :
> :True. However, you pay a pretty terrible price for this protection, as
> :far as I can tell. I can see several systems which literally could not
> :be implemented under these rules.
> 
> They would likely be designed differently, though, by paying more attention
> to locking issues.

It's difficult to communicate to someone without the experience, but it's
much like asking someone to do a periodic task without reference to any
timer, clock, etc. It's theoretically possible, but in practice such an
implementation is pretty much impossible.

> 
> :> :If you are saying that a thread (a series of object invocations) cannot be
> :> :interrupted at _any point_, then that pretty much eliminates
> :> :concurrency, doesn't it?
> :> 
> :> No, it just means it's more controlled (and safer). There is a complementary
> :> library mechanism called "express messages" which allows pre-emption of threads.
> :
> :Perhaps this "express messages" is the backdoor which I have to believe
> :a real system would need.
> 
> Correct.
> 
> :Correct me if I'm wrong, but no other thread can run while do_something is
> :executing?
> 
> On a single processor, yes.
> On a multiple processor, no.
> 
> :Or are you saying it can be interrupted, just that no other thread
> :can access the objects referenced in the parameter list (or any objects
> :referenced in _their_ parameter lists)?
> 
> Correct.

OK, this makes more sense. However, it does mean that my original comment is
correct -- assertions at the object level cannot be added up to form the
thread time, since threads can be interrupted. That's all I needed to know.

> 
> Have run out of time. Will respond to the rest tomorrow.
> 
> Don.
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Don Harrison     donh@syd.csa.com.au
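P.S. For reference, the thread-level timing check I keep referring to is
nothing more exotic than the following Ada sketch. Do_Step and the 10 ms
budget are purely illustrative stand-ins for the real object calls and the
real deadline:

   with Ada.Real_Time; use Ada.Real_Time;

   procedure Timed_Pass is
      Deadline_Error : exception;
      Budget : constant Time_Span := Milliseconds (10);  -- illustrative
      Start  : constant Time := Clock;

      procedure Do_Step is  -- stand-in for the object calls (P, Q, ...)
      begin                 -- that make up one pass of the thread
         null;
      end Do_Step;
   begin
      Do_Step;
      Do_Step;
      if Clock - Start > Budget then
         raise Deadline_Error;  -- the timing "assertion", at the thread
      end if;                   -- level rather than per object
   end Timed_Pass;

Since the thread can be interrupted between the calls, this is also why the
per-object assertion times can't simply be added up to get the pass time.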