From: Ken Garlington
Subject: Re: Safety-critical development in Ada and Eiffel
Date: 1997/08/20
Message-ID: <33FA7B1B.6576@flash.net>
References: <33F521D0.26@flash.net>
Reply-To: Ken.Garlington@computer.org
Organization: Flashnet Communications, http://www.flash.net
Newsgroups: comp.object,comp.software-eng,comp.lang.ada,comp.lang.eiffel

Don Harrison wrote:
> 
> I think it depends what you use priorities for. If you use them for their
> intended purpose of ensuring threads meet their deadlines, then they only
> need be applied to threads that have deadlines.

In feedback systems, it is a fundamental property that most of the threads
have explicit deadlines; otherwise the analog transfer function you've
discretized will not be implemented correctly.

> The problem is many Ada developers misuse priorities to achieve correct
> synchronisation. Realtime systems that misuse priorities in this way tend
> (not surprisingly) to be very fragile, suffering from race conditions.
> The basic problem with using priorities for synchronisation is that you
> cannot guarantee correct behaviour. It is the *wrong* tool for the job.
> So, what's the right tool?

Possibly true, although I've never seen it done (for the obvious reason you
cite!)

> It's Design by Contract! Why? Because it allows you to express the necessary
> (*blocking*) conditions which are assumed to exist when a concurrent
> operation is executed.

So can Ada.

> We recall that when those assumptions are violated in a sequential
> context, an exception is raised. In a concurrent context (ie. when the
> precondition involves a separate object), they assume synchronisation
> semantics. The caller blocks till another thread does something which
> causes the precondition to be satisfied.
> 
> For example, an operation adding an item to a buffer:
> 
>   add_to_buffer (b: BUFFER; item: SOME_TYPE) is
>     require not b.full
>     do
>       b.add (item)
>     end
> 
> Because b is non-separate (sequential), the precondition will fail if b is
> full.
> 
>   add_to_buffer (b: separate BUFFER; item: SOME_TYPE) is
>     require not b.full
>     do
>       b.add (item)
>     end
> 
> Because b is separate (concurrent), the precondition will cause blocking
> until another thread removes an item from the buffer (ie. until b is not
> full).
> 
> Any Ada developer reading this should immediately recognise the similarity
> with guarded task entries and barriers on protected type entries. These
> mechanisms, then, are additional ways (apart from predefined assertions
> (think Program_Error etc.) and range-constrained subtypes) in which Ada
> provides some support for DBC. In fact, because guards and barriers are
> *general* boolean expressions, they are about as close as Ada gets to the
> generality of Eiffel's support for DBC.
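For the benefit of readers who haven't seen the Ada side of this, the barrier
Don refers to looks roughly like the sketch below. It's off-the-cuff and
uncompiled; the names and the Integer item type are mine, standing in for
whatever a real application would use.

   package Bounded_Buffers is

      subtype Some_Type is Integer;   -- stand-in for the real item type
      type Item_Array is array (Positive range <>) of Some_Type;

      protected type Buffer (Size : Positive) is
         entry Add    (Item : in  Some_Type);   -- caller blocks while full
         entry Remove (Item : out Some_Type);   -- caller blocks while empty
      private
         Data  : Item_Array (1 .. Size);
         Head  : Positive := 1;
         Tail  : Positive := 1;
         Count : Natural  := 0;
      end Buffer;

   end Bounded_Buffers;

   package body Bounded_Buffers is

      protected body Buffer is

         -- The barrier ("when Count < Size") plays the same role as the
         -- SCOOP precondition "not b.full": callers queue until it holds.
         entry Add (Item : in Some_Type) when Count < Size is
         begin
            Data (Tail) := Item;
            Tail  := Tail mod Size + 1;
            Count := Count + 1;
         end Add;

         entry Remove (Item : out Some_Type) when Count > 0 is
         begin
            Item  := Data (Head);
            Head  := Head mod Size + 1;
            Count := Count - 1;
         end Remove;

      end Buffer;

   end Bounded_Buffers;

A task calling Add is queued until "Count < Size" becomes true -- the same
effect Don gets from the precondition "not b.full" on a separate object.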
> Now, back to the priority issue..
> 
> Because, under SCOOP, you can apply such synchronisation assertions to any
> separate object, you are able to specify precisely the assumptions relevant
> to the operations offered by those objects and guarantee correct use.
> This leaves priorities to be used for their proper purpose - expressing the
> relative importance of threads for facilitating timeliness. Consequently,
> I expect a system implemented using SCOOP would have more "priority-less"
> threads.

Unfortunately, systems that represent digital models of analog functions
(e.g. flight controls, IRSs) will confound your expectations :)

> :> Where timing *is* an issue, you use express messages. This mechanism
> :> allows threads to dynamically change their "priority" relative to other
> :> threads. This mechanism should only be used where it's critical to
> :> transfer control. There are two parties involved - the holder and the
> :> challenger - and they "duel" for control of a resource. Duels either
> :> result in a transfer of control or the challenger waiting till the
> :> holder finishes with the resource.
> :
> :Does this mean that each thread has to know its priority relative to all
> :other threads (more to the point, thread interfaces)?
> 
> No, but it does suggest the priority of a thread relative to all threads
> that it interacts with must be known.

That, to me, seems to be a problem with respect to encapsulating this
information (which is what I want to do).

> I'm not aware of any published information about how this is intended to
> work but imagine it could use absolute priorities (so implying relative
> ones):
> 
>   set_priority (p: PRIORITY) is ..   -- set absolute priority
>   yield (p: PRIORITY) is ..          -- yield to priority p or greater
>   etc.
> 
> :That would be a serious maintainability issue, I would think.
> 
> Yes, if it were true - but it isn't. See above.

If the relative priority of each interaction is distributed, then it's true.

> :> - A thread which absolutely must execute (possibly driven by a timer)
> :>   can issue a challenge for a resource.
> :
> :If, while another thread is operating, it cannot be interrupted (as your
> :previous note said, a thread cannot be interrupted even between object
> :calls), then how does the other thread begin execution to issue the
> :challenge?
> 
> We're talking single processor here, of course (the issue doesn't arise
> with multi-processors). An executing thread will give up the processor to
> another thread if it blocks on a synchronisation condition (a precondition
> on a separate object).

Not in the real world of embedded systems. Think about non-maskable hardware
interrupts.

> Expanding the buffer example above:
> 
>   x: separate BUFFER
>   s: SOME_TYPE
>   ...
> 
>   do_something is
>     do
>       ...
>       add_to_buffer (x, s)
>       ...
>     end
> 
> do_something may block on the call to add_to_buffer (because the buffer is
> full), causing another thread to run.
> 
> :> I was also concerned about this issue initially but came to the
> :> conclusion that objects would be locked for just as long as they
> :> (safely) need to be. Where it's important to release a frequently used
> :> shared resource quickly, various design strategies can be applied to
> :> minimise locking.
> :
> :Don't these "design strategies" cause the same uncertainty that you said
> :you didn't want?
> 
> No.

Why not? Can't the developer (whom you don't trust) fail to properly
implement these "design strategies"?
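To be concrete about what such a "design strategy" usually amounts to in Ada
(my sketch, not something Don has proposed): keep the protected action down
to a copy, and do the expensive work outside it. Roughly:

   type Sample is record
      Value : Float   := 0.0;
      Valid : Boolean := False;
   end record;

   procedure Process (S : in Sample);  -- the expensive part, defined elsewhere

   protected Latest_Sample is
      procedure Put (S : in Sample);   -- producer: store it and get out
      entry     Get (S : out Sample);  -- consumer: copy it and get out
   private
      Current : Sample;
   end Latest_Sample;

   protected body Latest_Sample is

      procedure Put (S : in Sample) is
      begin
         Current       := S;           -- nothing slow in here
         Current.Valid := True;
      end Put;

      entry Get (S : out Sample) when Current.Valid is
      begin
         S := Current;                 -- just a copy; no filtering, no I/O
         Current.Valid := False;
      end Get;

   end Latest_Sample;

   task Consumer;

   task body Consumer is
      S : Sample;
   begin
      loop
         Latest_Sample.Get (S);   -- time inside the lock stays short...
         Process (S);             -- ...because the long work happens out here
      end loop;
   end Consumer;

Nothing in the language stops a careless developer from dragging Process up
inside the protected body, which is exactly my point: the strategy lives or
dies on the developer getting it right.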
> Also, these techniques can be applied to *any* concurrent system - they
> are not SCOOP-specific. Hence, they're not relevant to any discussion about
> the relative merits of different concurrency models.

Are there deployed systems that use these techniques?

> :For example, if a lower-priority thread agrees to give up control
> :of an object to a higher-priority thread, then race conditions are possible
> :if the threads are not designed properly.
> 
> Assertions would help you to design them properly and avoid such conditions.

See above. Your original position was that the compiler should take care of
all of this, so that the designer would not make an error. Now you depend
upon the designer to implement the timing process properly (and to write the
proper assertions; an issue which my Ariane response addresses in some
detail).

> Also, invariants will guarantee that an object yielding in a duel will end
> up in a consistent state.
> 
> :This seems counter-intuitive. Consider the simple protected object
> :discussed earlier with the Read and Write operations. It would seem that
> :the object itself, with its internal knowledge of how Read and Write work,
> :would be the better place to control when blocking needs to occur.
> 
> That *is* how blocking works under SCOOP. The operation of the supplier
> object contains the precondition, so the supplier determines the
> (synchronisation) contract.
> 
> :In order for the
> :threads that use this object to decide the outcome of their "duel", don't
> :they have to know the internal operation of the object -- an object
> :contract violation?
> 
> No, only the object has to know (and how to restore its invariant - ie. how
> to return to a consistent state).

You're using the word "no" a lot without explanation. Two callers to the
object decide between *themselves* who gets to maintain access, yet now it's
the called object that determines the outcome of the duel? You've pretty much
worked me around in circles. I think I'll just wait until someone actually
uses SCOOP to make up my mind.

> :> :Priority inversion would also seem to be much more likely.
> :>
> :> Can you explain what you mean by this?
> :
> :Priority inversion? When a low-priority task seizes a high-priority
> :resource, it effectively operates at the priority of the seized resource.
> :If a high-priority task then attempts to seize the resource, it is
> :effectively blocked by the lower-priority task -- priority inversion.
> :However, since you can in fact have a thread interrupt another thread (it
> :has to happen, otherwise there is no "dueling"), and since the
> :higher-priority thread can seize the lower one, I assume this can be
> :avoided.
> 
> Correct.
> 
> :However, it also opens the door to both mutual exclusion and deadlock.
> 
> ..which are avoided (as in *any* concurrent system) by designing correctly.
> 
> :> No. In this case, the reads could occur concurrently (due to
> :> optimisations).
> :
> :Describe the general-purpose algorithm used by a compiler to determine
> :this.
> 
> Ask Robert Dewar, because that's what happens with protected types. :)

ABSOLUTELY NOT. The developer of the protected type explicitly provides the
criteria; there's no magic going on in the compiler. I think your
understanding of Ada is at least as hazy as my understanding of SCOOP :)
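To spell out what I mean by "the developer provides the criteria" (again, my
own sketch): in Ada you mark the read-only operations of a protected type as
*functions*, and only then is the implementation permitted to run those calls
concurrently; protected procedures and entries always get exclusive access to
the object. Roughly:

   type Meters is digits 6;

   protected Flight_Data is
      function  Altitude return Meters;        -- *declared* read-only
      procedure Update (New_Alt : in Meters);  -- needs exclusive access
   private
      Alt : Meters := 0.0;
   end Flight_Data;

   protected body Flight_Data is

      function Altitude return Meters is
      begin
         return Alt;      -- calls of protected *functions* may proceed
      end Altitude;       -- concurrently, because I declared them read-only

      procedure Update (New_Alt : in Meters) is
      begin
         Alt := New_Alt;  -- procedures (and entries) exclude everything else
      end Update;

   end Flight_Data;

No compiler analysis is divining which reads are safe; the developer asserts
it by choosing a function over a procedure, and lives with the consequences
if that choice is wrong.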
> 
> :> :You claimed that the compiler could optimize the timing properties of
> :> :the system; I would be interested in seeing such a compiler..
> :>
> :> I don't think it would be too hard. The compiler just has to verify that
> :> queries (functions) are truly benign
> :
> :Define "benign"! For example, is a read of a memory location "benign"?
> :Maybe not, if it's memory-mapped I/O. Some devices, for example, don't
> :take kindly to starting a read and being interrupted in the middle to do
> :a new read.
> 
> This issue is common to any concurrency model.

Which is why it's a situation that the _developer_, not the compiler, must
address. It is also why the language has to give the _developer_ the
flexibility to address it (even if he is still capable of shooting himself in
the foot).

> But the answer is simple:
> "If it isn't benign, don't allow concurrent reads".

To repeat: define "benign"! The definition has to be loose enough to permit
working systems, but tight enough to prohibit concurrency faults (your
original claim).

> :You may not think it's too hard, but I suspect you haven't encountered
> :many of these real-life systems.
> 
> You could be right. Perhaps I've learnt nothing from my 8 years of realtime
> experience (including 3 years of hard realtime). (I think combat systems
> qualify for hard realtime - at least, I found it quite hard. :)

Absolutely not, if the combat systems (1) were not _embedded_ (which is where
many of these problems come up), (2) did not have to run with minimal OS
support (I've seen workstation programmers with over ten years of realtime
experience who have never had to worry about these issues), and (3) did not
have to deal with hard realtime in the sense that the system _totally fails_
when a complex set of deadlines is missed -- as opposed to the user merely
getting irritated that the system is too slow.

> :> :Just out of curiosity, can the "local" section make reference to these
> :> :objects?
> :>
> :> Can't recall (and couldn't see from a quick glance at OOSC-2) but do
> :> know you would be limited in what you could do. For example, a local
> :> separate object couldn't be the target of a call. Why do you ask?
> :
> :Because you said that they were locked between the "do" and "end". If they
> :can be referenced in the local section, but are not locked, this would
> :seem to be a Bad Thing.
> 
> It appears the reattachment rules would ensure that it can only reference a
> locked object.
> 
> :> :Correct me if I'm wrong, but no other thread can run while do_something
> :> :is executing?
> :>
> :> On a single processor, yes.
> :> On a multiple processor, no.
> 
> Sorry, this was misleading. Another thread can run if the original blocks.
> 
> :> :Or are you saying it can be interrupted, just that no other thread
> :> :can access the objects referenced in the parameter list (or any objects
> :> :referenced in _their_ parameter lists)?
> :>
> :> Correct.
> :
> :OK, this makes more sense. However, it does mean that my original comment
> :is correct -- assertions at the object level cannot be added up to
> :form the thread time, since threads can be interrupted. That's all
> :I needed to know.
> 
> Don.
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Don Harrison  donh@syd.csa.com.au
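P.S. Since "benign" kept coming up: here is the sort of read I have in mind.
The device and the addresses below are invented, but read-sensitive registers
like these are bread and butter in embedded work:

   with System.Storage_Elements; use System.Storage_Elements;

   package UART_Registers is

      type Byte is mod 2**8;

      -- On this (made-up) part, reading RX_Data pops the hardware receive
      -- FIFO, and reading Status clears the error flags. Neither read is
      -- "benign": two threads reading "concurrently" will tear the data
      -- stream apart, and no compiler can deduce that from the source alone.

      RX_Data : Byte;
      for RX_Data'Address use To_Address (16#FFFF_0000#);  -- invented address
      pragma Volatile (RX_Data);

      Status : Byte;
      for Status'Address use To_Address (16#FFFF_0004#);   -- invented address
      pragma Volatile (Status);

   end UART_Registers;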