From: donh@syd.csa.com.au (Don Harrison)
Subject: Re: Safety-critical development in Ada and Eiffel
Date: 1997/08/19
Organization: CSC Australia, Sydney
Reply-To: donh@syd.csa.com.au
Newsgroups: comp.object,comp.software-eng,comp.lang.ada,comp.lang.eiffel

Ken Garlington wrote:

:Don Harrison wrote:
:>
:> The caller blocks until the resource is available. There are actually no
:> lower-priority threads - all threads are the same priority and their requests
:> are queued impartially. Where timing is not an issue this gives you what you
:> want because you get a kind of statistical "prioritisation". What I mean is
:> that threads will automatically get serviced by a supplier object in proportion
:> to the number of calls they make to it. In this situation, you don't care
:> which threads are serviced first. You only care about whether there is enough
:> processing capacity and if there isn't enough, you have a problem regardless
:> of what queueing scheme is used.
:
:That's true iff the threads are all the same priority. I've yet to work
:on a real system where this is the case, so this doesn't really help much...

I think it depends what you use priorities for.
If you use them for their intended purpose of ensuring threads meet their
deadlines, then they only need be applied to threads that have deadlines. The
problem is that many Ada developers misuse priorities to achieve correct
synchronisation. Realtime systems that misuse priorities in this way tend (not
surprisingly) to be very fragile, suffering from race conditions.

The basic problem with using priorities for synchronisation is that you cannot
guarantee correct behaviour. It is the *wrong* tool for the job. So, what's the
right tool? It's Design by Contract! Why? Because it allows you to express the
necessary (*blocking*) conditions which are assumed to exist when a concurrent
operation is executed. We recall that when those assumptions are violated in a
sequential context, an exception is raised. In a concurrent context (ie. when
the precondition involves a separate object), they assume synchronisation
semantics: the caller blocks till another thread does something which causes
the precondition to be satisfied.

For example, an operation adding an item to a buffer:

  add_to_buffer (b: BUFFER; item: SOME_TYPE) is
    require
      not b.full
    do
      b.add (item)
    end

Because b is non-separate (sequential), the precondition will fail if b is
full.

  add_to_buffer (b: separate BUFFER; item: SOME_TYPE) is
    require
      not b.full
    do
      b.add (item)
    end

Because b is separate (concurrent), the precondition will cause blocking until
another thread removes an item from the buffer (ie. until b is not full).

Any Ada developer reading this should immediately recognise the similarity
with guarded task entries and barriers on protected type entries. These
mechanisms, then, are additional ways (apart from predefined assertions (think
program_error etc.) and range-constrained subtypes) in which Ada provides some
support for DBC. In fact, because guards and barriers are *general* boolean
expressions, they are about as close as Ada gets to the generality of Eiffel's
support for DBC.
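The sequential-versus-separate contrast can be mimicked in any language with
condition variables. Here is a rough Python analogue (names like `separate`
and `PreconditionViolation` are my own, not SCOOP's): in "sequential" mode a
violated precondition is a failure, in "separate" mode it blocks the caller
instead.

```python
import threading

class PreconditionViolation(Exception):
    """Sequential analogue: a violated precondition is a failure."""

class Buffer:
    def __init__(self, capacity, separate=False):
        self.capacity = capacity
        self.items = []
        self.separate = separate          # models Eiffel's 'separate' mark
        self.cond = threading.Condition()

    def full(self):
        return len(self.items) >= self.capacity

    def add_to_buffer(self, item):
        with self.cond:
            if self.separate:
                # concurrent context: 'require not b.full' blocks until true
                while self.full():
                    self.cond.wait()
            elif self.full():
                # sequential context: 'require not b.full' raises
                raise PreconditionViolation("not b.full")
            self.items.append(item)
            self.cond.notify_all()

    def remove_from_buffer(self):
        with self.cond:
            while not self.items:
                self.cond.wait()
            item = self.items.pop(0)
            self.cond.notify_all()
            return item
```

The same call site behaves differently purely because of how the buffer was
declared - which is the point of attaching the condition to the supplier's
contract rather than scattering it over the callers.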
Now, back to the priority issue.. Because, under SCOOP, you can apply such
synchronisation assertions to any separate object, you are able to specify
precisely the assumptions relevant to the operations offered by those objects
and guarantee correct use. This leaves priorities to be used for their proper
purpose - expressing the relative importance of threads so as to facilitate
timeliness. Consequently, I expect a system implemented using SCOOP would have
more "priority-less" threads.

:> Where timing *is* an issue, you use express messages. This mechanism allows
:> threads to dynamically change their "priority" relative to other threads.
:> This mechanism should only be used where it's critical to transfer control.
:> There are two parties involved - the holder and the challenger - and they "duel"
:> for control of a resource. Duels either result in a transfer of control or
:> the challenger waiting till the holder finishes with the resource.
:
:Does this mean that each thread has to know its relative priority to all other
:threads (more to the point, thread interfaces)?

No, but it does suggest the priority of a thread relative to all threads that
it interacts with must be known. I'm not aware of any published information
about how this is intended to work but imagine it could use absolute
priorities (so implying relative ones):

  set_priority (p: PRIORITY) is ..  -- set absolute priority
  yield (p: PRIORITY) is ..         -- yield to priority p or greater

etc.

:That would be a serious maintainability issue, I would think.

Yes, if it were true - but it isn't.

:> - A thread which absolutely must execute (possibly driven by a timer) can
:> issue a challenge for a resource.
:
:If, while another thread is operating, it cannot be interrupted (as your
:previous note said, a thread cannot be interrupted even between object calls),
:then how does the other thread begin execution to issue the challenge?
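As an aside, the imagined set_priority / yield interface above might resolve a
duel along these lines. This is a purely hypothetical Python sketch of my
guess at the semantics - nothing here comes from a published SCOOP
specification - showing a challenger winning only if the holder has declared a
priority it is willing to yield to.

```python
class Resource:
    """Hypothetical model of a SCOOP 'duel' over one resource."""

    def __init__(self):
        self.holder = None            # (thread_name, priority) of current holder
        self.holder_yields_to = None  # priorities at or above this win a duel

    def seize(self, name, priority, yield_to=None):
        """Take the resource; yield_to models a 'yield (p)' declaration."""
        self.holder = (name, priority)
        self.holder_yields_to = yield_to

    def challenge(self, name, priority):
        """Return 'seized' if the challenger wins the duel, else 'waits'."""
        if self.holder is None:
            self.seize(name, priority)
            return 'seized'
        if self.holder_yields_to is not None and priority >= self.holder_yields_to:
            # Holder yields: in SCOOP terms it would get an exception and
            # restore its class invariant before losing the resource.
            self.seize(name, priority)
            return 'seized'
        return 'waits'
```

A challenger below the holder's declared yield threshold simply waits, which
matches the "transfer of control or wait" outcome described in the quote.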
We're talking single processor here, of course (the issue doesn't arise with
multi-processors). An executing thread will give up the processor to another
thread if it blocks on a synchronisation condition (a precondition on a
separate object). Expanding the buffer example above:

  x: separate BUFFER
  s: SOME_TYPE
  ...
  do_something is
    do
      ...
      add_to_buffer (x, s)
      ...
    end

do_something may block on the call to add_to_buffer (because the buffer is
full), causing another thread to run.

:> I was also concerned about this issue initially but came to the conclusion
:> that objects would be locked for just as long as they (safely) need to be.
:> Where it's important to release a frequently used shared resource quickly,
:> various design strategies can be applied to minimise locking.
:
:Don't these "design strategies" cause the same uncertainty that you said
:you didn't want?

No. Also, these techniques can be applied to *any* concurrent system - they
are not SCOOP-specific. Hence, they're not relevant to any discussion about
the relative merits of different concurrency models.

:For example, if a lower-priority thread agrees to give up control
:of an object to a higher-priority thread, then race conditions are possible
:if the threads are not designed properly.

Assertions would help you to design them properly and avoid such conditions.
Also, invariants will guarantee that an object yielding in a duel will end up
in a consistent state.

:This seems counter-intuitive. Consider the simple protected object discussed
:earlier with the Read and Write operations. It would seem that the object
:itself, with its internal knowledge of how Read and Write works, would be
:the better place to control when blocking needs to occur.

That *is* how blocking works under SCOOP. The operation of the supplier object
contains the precondition, so the supplier determines the (synchronisation)
contract.
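The "blocked caller gives up the processor" behaviour of do_something can be
demonstrated concretely. A small Python sketch (my own construction, using a
condition variable to stand in for the separate-precondition wait): the
producer's third add blocks on a full capacity-2 buffer, so the consumer runs
before that add can complete.

```python
import threading

def demo():
    """Record the interleaving of a producer whose third add blocks
    on a full buffer and a consumer that then gets to run."""
    cond = threading.Condition()
    buf = []
    capacity = 2
    order = []                          # records the interleaving

    def add(item):                      # analogue of 'require not b.full'
        with cond:
            while len(buf) >= capacity:
                cond.wait()             # blocked: the processor goes elsewhere
            buf.append(item)
            order.append(('add', item))
            cond.notify_all()

    def producer():
        for i in range(3):              # the third add finds the buffer full
            add(i)

    def consumer():
        with cond:
            while len(buf) < capacity:  # run once the producer has filled up
                cond.wait()
            order.append(('remove', buf.pop(0)))
            cond.notify_all()

    threads = [threading.Thread(target=producer),
               threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join(timeout=5)
    return order
```

The recorded order always shows the consumer's removal occurring between the
producer's second and third adds - exactly the hand-over described above.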
:In order for the
:threads that use this object to decide the outcome of their "duel", don't
:they have to know the internal operation of the object -- an object contract
:violation?

No, only the object has to know (and how to restore its invariant - ie. how to
return to a consistent state).

:> :Priority inversion would also seem to be much more likely.
:>
:> Can you explain what you mean by this?
:
:Priority inversion? When a low priority task seizes a high priority resource,
:it effectively operates at the priority of the seized resource. If a high
:priority task then attempts to seize the resource, it is effectively blocked
:by the lower priority task -- priority inversion. However, since you can in
:fact have a thread interrupt another thread (has to happen, otherwise there
:is no "dueling"), and since the higher-priority thread can seize the lower
:one, I assume this can be avoided.

Correct.

:However, it also opens the door to both mutual exclusion and deadlock.

..which are avoided (as in *any* concurrent system) by designing correctly.

:> No. In this case, the reads could occur concurrently (due to optimisations).
:
:Describe the general-purpose algorithm used to determine this by a compiler.

Ask Robert Dewar because that's what happens with protected types. :)

:> :You claimed that the compiler could optimize the timing properties of
:> :the system; I would be interested in seeing such a compiler..
:>
:> I don't think it would be too hard. The compiler just has to verify that
:> queries (functions) are truly benign
:
:Define "benign"! For example, is a read of a memory location "benign"?
:Maybe not, if it's memory-mapped I/O. Some devices, for example, don't take
:kindly to starting a read and being interrupted in the middle to do a new
:read.

This issue is common to any concurrency model. But the answer is simple: "If
it isn't benign, don't allow concurrent reads".
:You may not think it's too hard, but I suspect you haven't encountered
:many of these real-life systems.

You could be right. Perhaps I've learnt nothing from my 8 years' realtime
experience (including 3 years hard realtime). (I think combat systems qualify
for hard realtime - at least, I found it quite hard. :)

:> :Just out of curiosity, can the "local" section make reference to these
:> :objects?
:>
:> Can't recall (and couldn't see from a quick glance at OOSC-2) but do know
:> you would be limited in what you could do. For example, a local separate
:> object couldn't be the target of a call. Why do you ask?
:
:Because you said that they were locked between the "do" and "end". If they
:can be referenced in the local section, but are not locked, this would seem
:to be a Bad Thing.

It appears the reattachment rules would ensure that it can only reference a
locked object.

:> :Correct me if I'm wrong, but no other thread can run while do_something is
:> :executing?
:>
:> On a single processor, yes.
:> On a multiple processor, no.

Sorry, this was misleading. Another thread can run if the original blocks.

:> :Or are you saying it can be interrupted, just that no other thread
:> :can access the objects referenced in the parameter list (or any objects
:> :referenced in _their_ parameter lists? )?
:>
:> Correct.
:
:OK, this makes more sense. However, it does mean that my original comment
:is correct -- assertions at the object level cannot be added up to
:form the thread time, since threads can be interrupted. That's all
:I needed to know.

Don.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Don Harrison
donh@syd.csa.com.au