From: Ken Garlington
Subject: Re: Safety-critical development in Ada and Eiffel
Date: 1997/08/13
Message-ID: <33F26189.B3E@flash.net>
References: <33E9AF97.296F@flash.net>
Organization: Flashnet Communications, http://www.flash.net
Reply-To: Ken.Garlington@computer.org
Newsgroups: comp.object,comp.software-eng,comp.lang.ada,comp.lang.eiffel

Don Harrison wrote:
>
> Ken asked for evidence that SCOOP supports thread-level timing.
>
> Ken Garlington wrote:
>
> > Why are they collectively locked?
> Answer: So that processing can be performed under known conditions without
> interference from other threads (helping to avoid race conditions). Also,
> so that we know up-front that we have all the resources we need (helping
> to avoid deadlock).

What happens if the resource is unavailable (already locked by a
lower-priority thread, for example)?

It seems to me that locking _all_ objects involved in an operation (without
regard to whether they guard a sensitive resource, such as a data store
that can be written by multiple threads) means that the latencies
associated with the threads go up dramatically, particularly if the locking
is transitive. Priority inversion would also seem to be much more likely.
Ada has a different approach.
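To make the contrast concrete, here is a minimal sketch of the Ada 95
approach (all names are illustrative, not from the discussion above): a
protected object locks only itself, protected functions allow concurrent
readers while procedures get exclusive access, and the Ceiling_Locking
policy bounds priority inversion.

```ada
pragma Locking_Policy (Ceiling_Locking);  -- Annex D policy; bounds inversion

package Shared_Store is

   protected Sensor_Value is
      pragma Priority (10);              -- ceiling priority of this lock
      function Read return Integer;      -- readers may proceed concurrently
      procedure Write (V : Integer);     -- writers get exclusive access
   private
      Value : Integer := 0;
   end Sensor_Value;

end Shared_Store;

package body Shared_Store is

   protected body Sensor_Value is

      function Read return Integer is
      begin
         return Value;
      end Read;

      procedure Write (V : Integer) is
      begin
         Value := V;
      end Write;

   end Sensor_Value;

end Shared_Store;
```

Under Ceiling_Locking, a caller of Sensor_Value executes at the ceiling
priority, so a low-priority task inside Write cannot be preempted by a
middle-priority task; the blocking a high-priority task can suffer is
bounded by one critical section rather than being unbounded.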
Consider a fairly simple case of a value that is read by some tasks and
written by others. Clearly, if a thread is writing the value, it should
start and complete that operation before any other access to that
resource. Similarly, any attempt to read the resource should start and
complete before any write to the resource begins. However, a read by a
low-priority task can be suspended to permit a higher-priority task to
read the same resource (assuming the read has no side effects). As I
understand it, you would prohibit this benign "interference." Thus, the
high-priority task (which may run much more frequently than the
low-priority task) is unnecessarily delayed, potentially causing it to
miss its deadline.

You claimed that the compiler could optimize the timing properties of the
system; I would be interested in seeing such a compiler. Today, there are
tools that can help the designer analyze the system; I have yet to see one
that could do a real-time architecture design autonomously!

> > When are they collectively locked?
> Answer: When they're supplied as parameters to an operation.
>
> > What if they're not supplied as parameters?
> Answer: They always *are* (in the case of concurrent (separate) objects).
> In the case of sequential (non-separate) objects, we don't care because
> we know they're local to the calling thread.
>
> See the example below...
>
> :> :Furthermore, here is a counter-argument to writing timing assertions
> :> :at the object level:
> :> :
> :> :Using letters to denote objects, let's build a thread. The notation
> :> :x{object}y means that the object is executed between x and y times.
> :> :The arrows indicate that one object is executed before another object
> :> :in the sequence. (For simplicity, we'll neglect objects calling
> :> :objects.)
> :> :
> :> :Thread: A -> B -> 1{C}4 -> D
> :> :
> :> :The max time for this thread is: A + B + 4*C + D, right?
> :> :
> :> :No!
> :> :
> :> :It also includes:
> :> : 1.
> :> :    the entry/exit code for each object (4 times that, for C),
> :> : 2. the time to start and stop the thread itself
> :> : 3. Any time between object invocations "stolen" by other threads.
> :>
> :> 3) doesn't apply to SCOOP because all objects are locked.
> :
> :(3) represents the time _in between_ object invocations.
>
> You might code your example in pseudo-SCOOP:
>
> Assuming separate (allocated to different threads) entities
>
>    my_p: separate P
>    my_q: separate Q
>    my_r: separate R
>    my_s: separate S
>
> and operation
>
>    do_something (p: separate P; q: separate Q;
>                  r: separate R; s: separate S) is
>       local
>          start_time: time
>          duration: ...
>       do
>          start_time := time_now
>          p.a
>          q.b
>          loop (4 times)
>             r.c
>          end
>          s.d
>          duration := time_now - start_time
>          ...
>       end
>
> the call
>
>    do_something (my_p, my_q, my_r, my_s)

It wouldn't catch the entry/exit time of do_something itself, but that's
not a big deal. I see you've missed the original part of the thread. The
poster suggested writing assertions (post-conditions) on P, Q, R, and S as
the way to do this, which (as I said) won't work effectively. Your approach
is at the thread level, not the object level, which is the way I would have
done it as well. So, in fact, we agree that the original post is in error
(I assume).

> will do the required actions and measure the elapsed time.
>
> Note that all four objects are locked for the block between "do" and
> "end". This means that there is no intervening time between calls to them
> and there is no opportunity for any other threads to jump in and make a
> mess of things.

Just out of curiosity, can the "local" section make reference to these
objects? Also, how is "do_something" attached to a hardware event? What are
the implications of doing so?

> Note that this offers more protection than Ada protected objects, which
> are only locked on a single-object basis. As you will see later, this
> also removes the need for a "requeue" statement.

True.
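For comparison, the pseudo-SCOOP do_something above might be sketched at
the thread level in Ada roughly as follows (purely illustrative: the
packages P, Q, R, S and their operations are placeholders mirroring the
objects in the example). Because the measurement brackets the whole
sequence, per-object entry/exit overhead and any time stolen between calls
are included in the measured figure, which is the point of measuring at the
thread level:

```ada
with Ada.Real_Time; use Ada.Real_Time;
with P, Q, R, S;  -- hypothetical packages providing A, B, C, and D

procedure Do_Something is
   Start_Time : constant Time := Clock;  -- start measurement before A
   Elapsed    : Time_Span;
begin
   P.A;
   Q.B;
   for I in 1 .. 4 loop
      R.C;
   end loop;
   S.D;
   Elapsed := Clock - Start_Time;        -- ...and stop it after D
end Do_Something;
```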
However, you pay a pretty terrible price for this collective locking, as
far as I can tell. I can see several systems which literally could not be
implemented under these rules.

> :If you are saying that a thread (a series of object invocations) cannot
> :be interrupted at _any point_, then that pretty much eliminates
> :concurrency, doesn't it?
>
> No, it just means it's more controlled (and safer). There is a
> complementary library mechanism called "express messages" which allows
> pre-emption of threads.

Perhaps this "express messages" mechanism is the backdoor which I have to
believe a real system would need. Correct me if I'm wrong, but no other
thread can run while do_something is executing? Or are you saying it can be
interrupted, just that no other thread can access the objects referenced in
the parameter list (or any objects referenced in _their_ parameter lists)?

> :I have a feeling that we're just not communicating on this issue.
>
> True.
>
> :My experience is in real-time systems, and I think I'm just not
> :properly conveying the issues involved in developing those systems.
>
> Likewise.
>
> :> :If you start the measurement before the call to A, and stop it
> :> :after the call to D, you only have to worry about #2 above
> :> :(which is usually fixed). If you do the measurement at the object
> :> :level, all three will skew your results.
> :>
> :> Not true. If you really want to get a clue on this, I suggest you get
> :> hold of OOSC-2 and read the chapter on concurrency.
>
> Sorry, Ken. I was a bit rude here.

A little confusing, too, since you _do_ in fact start the measurement
before the call to A (or p.a) and stop it after the last call in your
example!

> :Ken Garlington wrote (about Ada protected types):
>
> ::can requeue requests,
>
> I wrote:
>
> :IMO, the situations in which you would use "requeue" are better handled
> :by designing differently - perhaps by using an additional class.
>
> Sorry, this is wrong.
> The problem in Ada which "requeue" is designed to deal with doesn't arise
> in SCOOP because successive calls to the same separate object are atomic.
> That is, the object doesn't get unlocked between calls.

True. However, the question still stands: what happens if a thread
attempts to access a locked (or otherwise unavailable) resource? I can use
requeue in these situations; what do you do?

> For example,
>
>    do_something (a: separate A) is
>       do
>          a.do_x
>          a.do_y
>       end
>
> does exactly what we expect. There is no need for do_x to "requeue" do_y.

That's not how requeue works, as I understand it. The issue is more: you
attempt to do a.do_x, and someone else has already seized the resource, or
there is some other time-based reason why a.do_x is unavailable.

> IMO, "requeue" is a workaround to a design flaw in Ada's concurrency -
> namely, locking at the object level. Further, it's a *deficient*
> workaround because supplier objects are forced to do things (do_y) that
> should be the responsibility of clients (do_something).

It depends. A "requeue" may be needed because of the state of the object,
and may not be related to the state of do_something.

> :If anything, "requeue" probably encourages poor design.
>
> .. as I've just explained.

Sorry, I don't see it. I think you would need to apply SCOOP to some fairly
complex real-time systems to see the problems I'm discussing. So long as
all the objects have short lifetimes, and no priority issues are around, I
would assume it seems to work fine for simple systems.

> Don.
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Don Harrison donh@syd.csa.com.au